00:00:00.000 Started by upstream project "autotest-nightly" build number 4361
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3724
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.015 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.016 The recommended git tool is: git
00:00:00.016 using credential 00000000-0000-0000-0000-000000000002
00:00:00.019 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.031 Fetching changes from the remote Git repository
00:00:00.033 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.047 Using shallow fetch with depth 1
00:00:00.047 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.047 > git --version # timeout=10
00:00:00.063 > git --version # 'git version 2.39.2'
00:00:00.063 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.078 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.078 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:02.257 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:02.267 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:02.278 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:02.278 > git config core.sparsecheckout # timeout=10
00:00:02.287 > git read-tree -mu HEAD # timeout=10
00:00:02.300 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:02.317 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:02.318 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:02.536 [Pipeline] Start of Pipeline
00:00:02.547 [Pipeline] library
00:00:02.548 Loading library shm_lib@master
00:00:02.549 Library shm_lib@master is cached. Copying from home.
00:00:02.565 [Pipeline] node
00:00:02.576 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:02.577 [Pipeline] {
00:00:02.583 [Pipeline] catchError
00:00:02.584 [Pipeline] {
00:00:02.592 [Pipeline] wrap
00:00:02.597 [Pipeline] {
00:00:02.603 [Pipeline] stage
00:00:02.604 [Pipeline] { (Prologue)
00:00:02.615 [Pipeline] echo
00:00:02.616 Node: VM-host-WFP7
00:00:02.620 [Pipeline] cleanWs
00:00:02.627 [WS-CLEANUP] Deleting project workspace...
00:00:02.627 [WS-CLEANUP] Deferred wipeout is used...
00:00:02.633 [WS-CLEANUP] done
00:00:02.807 [Pipeline] setCustomBuildProperty
00:00:02.913 [Pipeline] httpRequest
00:00:03.233 [Pipeline] echo
00:00:03.234 Sorcerer 10.211.164.20 is alive
00:00:03.242 [Pipeline] retry
00:00:03.243 [Pipeline] {
00:00:03.255 [Pipeline] httpRequest
00:00:03.259 HttpMethod: GET
00:00:03.260 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.260 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.261 Response Code: HTTP/1.1 200 OK
00:00:03.261 Success: Status code 200 is in the accepted range: 200,404
00:00:03.262 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.407 [Pipeline] }
00:00:03.419 [Pipeline] // retry
00:00:03.425 [Pipeline] sh
00:00:03.705 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:03.718 [Pipeline] httpRequest
00:00:04.026 [Pipeline] echo
00:00:04.027 Sorcerer 10.211.164.20 is alive
00:00:04.036 [Pipeline] retry
00:00:04.038 [Pipeline] {
00:00:04.051 [Pipeline] httpRequest
00:00:04.055 HttpMethod: GET
00:00:04.056 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:04.056 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:04.057 Response Code: HTTP/1.1 200 OK
00:00:04.058 Success: Status code 200 is in the accepted range: 200,404
00:00:04.058 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:44.537 [Pipeline] }
00:00:44.555 [Pipeline] // retry
00:00:44.563 [Pipeline] sh
00:00:44.849 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:47.403 [Pipeline] sh
00:00:47.687 + git -C spdk log --oneline -n5
00:00:47.687 e01cb43b8 mk/spdk.common.mk sed the minor version
00:00:47.687 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:00:47.687 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:00:47.687 66289a6db build: use VERSION file for storing version
00:00:47.687 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:00:47.706 [Pipeline] writeFile
00:00:47.720 [Pipeline] sh
00:00:48.020 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:48.048 [Pipeline] sh
00:00:48.334 + cat autorun-spdk.conf
00:00:48.334 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:48.334 SPDK_RUN_ASAN=1
00:00:48.334 SPDK_RUN_UBSAN=1
00:00:48.334 SPDK_TEST_RAID=1
00:00:48.334 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:48.342 RUN_NIGHTLY=1
00:00:48.344 [Pipeline] }
00:00:48.358 [Pipeline] // stage
00:00:48.372 [Pipeline] stage
00:00:48.374 [Pipeline] { (Run VM)
00:00:48.387 [Pipeline] sh
00:00:48.672 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:48.672 + echo 'Start stage prepare_nvme.sh'
00:00:48.672 Start stage prepare_nvme.sh
00:00:48.672 + [[ -n 3 ]]
00:00:48.672 + disk_prefix=ex3
00:00:48.672 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:48.672 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:48.672 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:48.672 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:48.672 ++ SPDK_RUN_ASAN=1
00:00:48.672 ++ SPDK_RUN_UBSAN=1
00:00:48.672 ++ SPDK_TEST_RAID=1
00:00:48.672 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:48.672 ++ RUN_NIGHTLY=1
00:00:48.672 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:48.672 + nvme_files=()
00:00:48.672 + declare -A nvme_files
00:00:48.672 + backend_dir=/var/lib/libvirt/images/backends
00:00:48.673 + nvme_files['nvme.img']=5G
00:00:48.673 + nvme_files['nvme-cmb.img']=5G
00:00:48.673 + nvme_files['nvme-multi0.img']=4G
00:00:48.673 + nvme_files['nvme-multi1.img']=4G
00:00:48.673 + nvme_files['nvme-multi2.img']=4G
00:00:48.673 + nvme_files['nvme-openstack.img']=8G
00:00:48.673 + nvme_files['nvme-zns.img']=5G
00:00:48.673 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:48.673 + (( SPDK_TEST_FTL == 1 ))
00:00:48.673 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:48.673 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:48.673 + for nvme in "${!nvme_files[@]}"
00:00:48.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G
00:00:48.673 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:48.673 + for nvme in "${!nvme_files[@]}"
00:00:48.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G
00:00:48.673 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:48.673 + for nvme in "${!nvme_files[@]}"
00:00:48.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G
00:00:48.673 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:48.673 + for nvme in "${!nvme_files[@]}"
00:00:48.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G
00:00:48.673 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:48.673 + for nvme in "${!nvme_files[@]}"
00:00:48.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G
00:00:48.673 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:48.673 + for nvme in "${!nvme_files[@]}"
00:00:48.673 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G
00:00:48.933 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:48.933 + for nvme in "${!nvme_files[@]}"
00:00:48.933 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G
00:00:48.933 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:48.933 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu
00:00:48.933 + echo 'End stage prepare_nvme.sh'
00:00:48.933 End stage prepare_nvme.sh
00:00:48.946 [Pipeline] sh
00:00:49.231 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:49.231 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39
00:00:49.231
00:00:49.231 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:49.231 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:49.231 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:49.231 HELP=0
00:00:49.231 DRY_RUN=0
00:00:49.231 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,
00:00:49.231 NVME_DISKS_TYPE=nvme,nvme,
00:00:49.231 NVME_AUTO_CREATE=0
00:00:49.231 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,
00:00:49.231 NVME_CMB=,,
00:00:49.231 NVME_PMR=,,
00:00:49.231 NVME_ZNS=,,
00:00:49.231 NVME_MS=,,
00:00:49.231 NVME_FDP=,,
00:00:49.231 SPDK_VAGRANT_DISTRO=fedora39
00:00:49.231 SPDK_VAGRANT_VMCPU=10
00:00:49.231 SPDK_VAGRANT_VMRAM=12288
00:00:49.231 SPDK_VAGRANT_PROVIDER=libvirt
00:00:49.231 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:49.231 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:49.231 SPDK_OPENSTACK_NETWORK=0
00:00:49.231 VAGRANT_PACKAGE_BOX=0
00:00:49.231 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:49.231 FORCE_DISTRO=true
00:00:49.231 VAGRANT_BOX_VERSION=
00:00:49.231 EXTRA_VAGRANTFILES=
00:00:49.231 NIC_MODEL=virtio
00:00:49.231
00:00:49.231 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:49.231 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:51.768 Bringing machine 'default' up with 'libvirt' provider...
00:00:52.028 ==> default: Creating image (snapshot of base box volume).
00:00:52.289 ==> default: Creating domain with the following settings...
00:00:52.289 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734179211_e899bcf38f54f9fbb3f3
00:00:52.289 ==> default: -- Domain type: kvm
00:00:52.289 ==> default: -- Cpus: 10
00:00:52.289 ==> default: -- Feature: acpi
00:00:52.289 ==> default: -- Feature: apic
00:00:52.289 ==> default: -- Feature: pae
00:00:52.289 ==> default: -- Memory: 12288M
00:00:52.289 ==> default: -- Memory Backing: hugepages:
00:00:52.289 ==> default: -- Management MAC:
00:00:52.289 ==> default: -- Loader:
00:00:52.289 ==> default: -- Nvram:
00:00:52.289 ==> default: -- Base box: spdk/fedora39
00:00:52.289 ==> default: -- Storage pool: default
00:00:52.289 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734179211_e899bcf38f54f9fbb3f3.img (20G)
00:00:52.289 ==> default: -- Volume Cache: default
00:00:52.289 ==> default: -- Kernel:
00:00:52.289 ==> default: -- Initrd:
00:00:52.289 ==> default: -- Graphics Type: vnc
00:00:52.289 ==> default: -- Graphics Port: -1
00:00:52.289 ==> default: -- Graphics IP: 127.0.0.1
00:00:52.289 ==> default: -- Graphics Password: Not defined
00:00:52.289 ==> default: -- Video Type: cirrus
00:00:52.289 ==> default: -- Video VRAM: 9216
00:00:52.289 ==> default: -- Sound Type:
00:00:52.289 ==> default: -- Keymap: en-us
00:00:52.289 ==> default: -- TPM Path:
00:00:52.289 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:52.289 ==> default: -- Command line args:
00:00:52.289 ==> default: -> value=-device,
00:00:52.289 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:52.289 ==> default: -> value=-drive,
00:00:52.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0,
00:00:52.289 ==> default: -> value=-device,
00:00:52.289 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:52.289 ==> default: -> value=-device,
00:00:52.289 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:52.289 ==> default: -> value=-drive,
00:00:52.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:52.289 ==> default: -> value=-device,
00:00:52.289 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:52.289 ==> default: -> value=-drive,
00:00:52.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:52.289 ==> default: -> value=-device,
00:00:52.289 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:52.289 ==> default: -> value=-drive,
00:00:52.289 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:52.289 ==> default: -> value=-device,
00:00:52.289 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:52.289 ==> default: Creating shared folders metadata...
00:00:52.549 ==> default: Starting domain.
00:00:53.928 ==> default: Waiting for domain to get an IP address...
00:01:12.020 ==> default: Waiting for SSH to become available...
00:01:12.020 ==> default: Configuring and enabling network interfaces...
00:01:17.302 default: SSH address: 192.168.121.3:22
00:01:17.302 default: SSH username: vagrant
00:01:17.302 default: SSH auth method: private key
00:01:19.900 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:28.059 ==> default: Mounting SSHFS shared folder...
00:01:29.970 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:29.970 ==> default: Checking Mount..
00:01:31.876 ==> default: Folder Successfully Mounted!
00:01:31.876 ==> default: Running provisioner: file...
00:01:32.815 default: ~/.gitconfig => .gitconfig
00:01:33.074
00:01:33.074 SUCCESS!
00:01:33.074
00:01:33.074 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:33.074 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:33.074 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:33.074
00:01:33.084 [Pipeline] }
00:01:33.098 [Pipeline] // stage
00:01:33.108 [Pipeline] dir
00:01:33.108 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:01:33.110 [Pipeline] {
00:01:33.122 [Pipeline] catchError
00:01:33.124 [Pipeline] {
00:01:33.136 [Pipeline] sh
00:01:33.418 + vagrant ssh-config --host vagrant
00:01:33.418 + sed -ne /^Host/,$p
00:01:33.418 + tee ssh_conf
00:01:35.997 Host vagrant
00:01:35.997 HostName 192.168.121.3
00:01:35.997 User vagrant
00:01:35.997 Port 22
00:01:35.997 UserKnownHostsFile /dev/null
00:01:35.997 StrictHostKeyChecking no
00:01:35.997 PasswordAuthentication no
00:01:35.997 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:35.997 IdentitiesOnly yes
00:01:35.997 LogLevel FATAL
00:01:35.997 ForwardAgent yes
00:01:35.997 ForwardX11 yes
00:01:35.997
00:01:36.012 [Pipeline] withEnv
00:01:36.014 [Pipeline] {
00:01:36.027 [Pipeline] sh
00:01:36.310 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:36.310 source /etc/os-release
00:01:36.310 [[ -e /image.version ]] && img=$(< /image.version)
00:01:36.310 # Minimal, systemd-like check.
00:01:36.310 if [[ -e /.dockerenv ]]; then
00:01:36.310 # Clear garbage from the node's name:
00:01:36.310 # agt-er_autotest_547-896 -> autotest_547-896
00:01:36.310 # $HOSTNAME is the actual container id
00:01:36.310 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:36.310 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:36.310 # We can assume this is a mount from a host where container is running,
00:01:36.310 # so fetch its hostname to easily identify the target swarm worker.
00:01:36.310 container="$(< /etc/hostname) ($agent)"
00:01:36.310 else
00:01:36.310 # Fallback
00:01:36.310 container=$agent
00:01:36.310 fi
00:01:36.310 fi
00:01:36.310 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:36.310
00:01:36.580 [Pipeline] }
00:01:36.596 [Pipeline] // withEnv
00:01:36.604 [Pipeline] setCustomBuildProperty
00:01:36.619 [Pipeline] stage
00:01:36.622 [Pipeline] { (Tests)
00:01:36.638 [Pipeline] sh
00:01:36.922 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:37.195 [Pipeline] sh
00:01:37.478 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:37.753 [Pipeline] timeout
00:01:37.754 Timeout set to expire in 1 hr 30 min
00:01:37.756 [Pipeline] {
00:01:37.771 [Pipeline] sh
00:01:38.055 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:38.624 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:38.637 [Pipeline] sh
00:01:38.920 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:39.194 [Pipeline] sh
00:01:39.478 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:39.755 [Pipeline] sh
00:01:40.040 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:01:40.299 ++ readlink -f spdk_repo
00:01:40.299 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:40.299 + [[ -n /home/vagrant/spdk_repo ]]
00:01:40.299 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:40.299 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:40.299 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:40.299 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:40.299 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:40.299 + [[ raid-vg-autotest == pkgdep-* ]]
00:01:40.299 + cd /home/vagrant/spdk_repo
00:01:40.299 + source /etc/os-release
00:01:40.299 ++ NAME='Fedora Linux'
00:01:40.299 ++ VERSION='39 (Cloud Edition)'
00:01:40.299 ++ ID=fedora
00:01:40.299 ++ VERSION_ID=39
00:01:40.299 ++ VERSION_CODENAME=
00:01:40.299 ++ PLATFORM_ID=platform:f39
00:01:40.299 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:40.299 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:40.299 ++ LOGO=fedora-logo-icon
00:01:40.299 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:40.299 ++ HOME_URL=https://fedoraproject.org/
00:01:40.299 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:40.299 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:40.299 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:40.299 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:40.299 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:40.299 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:40.299 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:40.299 ++ SUPPORT_END=2024-11-12
00:01:40.299 ++ VARIANT='Cloud Edition'
00:01:40.299 ++ VARIANT_ID=cloud
00:01:40.299 + uname -a
00:01:40.299 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:40.299 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:40.866 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:40.866 Hugepages
00:01:40.866 node hugesize free / total
00:01:40.866 node0 1048576kB 0 / 0
00:01:40.866 node0 2048kB 0 / 0
00:01:40.866
00:01:40.866 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:40.866 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:40.866 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:40.866 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:40.866 + rm -f /tmp/spdk-ld-path
00:01:40.866 + source autorun-spdk.conf
00:01:40.866 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:40.866 ++ SPDK_RUN_ASAN=1
00:01:40.866 ++ SPDK_RUN_UBSAN=1
00:01:40.866 ++ SPDK_TEST_RAID=1
00:01:40.866 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:40.866 ++ RUN_NIGHTLY=1
00:01:40.866 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:40.866 + [[ -n '' ]]
00:01:40.866 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:40.866 + for M in /var/spdk/build-*-manifest.txt
00:01:40.866 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:40.866 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:40.866 + for M in /var/spdk/build-*-manifest.txt
00:01:40.866 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:40.866 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:40.866 + for M in /var/spdk/build-*-manifest.txt
00:01:40.866 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:40.866 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:40.866 ++ uname
00:01:40.866 + [[ Linux == \L\i\n\u\x ]]
00:01:40.866 + sudo dmesg -T
00:01:40.866 + sudo dmesg --clear
00:01:41.170 + dmesg_pid=5429
00:01:41.170 + [[ Fedora Linux == FreeBSD ]]
00:01:41.170 + sudo dmesg -Tw
00:01:41.170 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.170 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:41.170 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:41.170 + [[ -x /usr/src/fio-static/fio ]]
00:01:41.170 + export FIO_BIN=/usr/src/fio-static/fio
00:01:41.170 + FIO_BIN=/usr/src/fio-static/fio
00:01:41.170 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:41.170 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:41.170 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:41.170 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.170 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:41.170 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:41.170 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.170 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:41.170 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:41.170 12:27:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:41.170 12:27:40 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:41.170 12:27:40 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.170 12:27:40 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:01:41.170 12:27:40 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:01:41.170 12:27:40 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:01:41.170 12:27:40 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:41.170 12:27:40 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
00:01:41.170 12:27:40 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:41.170 12:27:40 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:41.170 12:27:40 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:41.170 12:27:40 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:41.170 12:27:40 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:41.170 12:27:40 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:41.170 12:27:40 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:41.170 12:27:40 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:41.170 12:27:40 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.170 12:27:40 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.170 12:27:40 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.170 12:27:40 -- paths/export.sh@5 -- $ export PATH
00:01:41.170 12:27:40 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:41.170 12:27:40 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:41.170 12:27:40 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:41.170 12:27:40 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734179260.XXXXXX
00:01:41.170 12:27:40 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734179260.SlQnfa
00:01:41.170 12:27:40 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:41.170 12:27:40 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:41.170 12:27:40 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:41.170 12:27:40 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:41.170 12:27:40 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:41.170 12:27:40 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:41.170 12:27:40 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:41.170 12:27:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:41.170 12:27:40 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:01:41.170 12:27:40 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:41.170 12:27:40 -- pm/common@17 -- $ local monitor
00:01:41.170 12:27:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.170 12:27:40 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:41.170 12:27:40 -- pm/common@25 -- $ sleep 1
00:01:41.170 12:27:40 -- pm/common@21 -- $ date +%s
00:01:41.170 12:27:40 -- pm/common@21 -- $ date +%s
00:01:41.170 12:27:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734179260
00:01:41.170 12:27:40 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734179260
00:01:41.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734179260_collect-vmstat.pm.log
00:01:41.429 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734179260_collect-cpu-load.pm.log
00:01:42.368 12:27:41 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:42.368 12:27:41 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:42.368 12:27:41 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:42.368 12:27:41 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:42.368 12:27:41 -- spdk/autobuild.sh@16 -- $ date -u
00:01:42.368 Sat Dec 14 12:27:41 PM UTC 2024
00:01:42.368 12:27:41 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:42.368 v25.01-rc1-2-ge01cb43b8
00:01:42.368 12:27:41 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:42.368 12:27:41 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:42.368 12:27:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:42.368 12:27:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:42.368 12:27:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.368 ************************************
00:01:42.368 START TEST asan
00:01:42.368 ************************************
00:01:42.368 using asan
00:01:42.368 12:27:41 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:42.368
00:01:42.368 real	0m0.000s
00:01:42.368 user	0m0.000s
00:01:42.368 sys	0m0.000s
00:01:42.368 12:27:41 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:42.368 12:27:41 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:42.368 ************************************
00:01:42.368 END TEST asan
00:01:42.368 ************************************
00:01:42.368 12:27:41 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:42.368 12:27:41 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:42.368 12:27:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:42.368 12:27:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:42.368 12:27:41 -- common/autotest_common.sh@10 -- $ set +x
00:01:42.368 ************************************
00:01:42.368 START TEST ubsan
00:01:42.368 ************************************
00:01:42.368 using ubsan
00:01:42.368 12:27:41 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:42.368
00:01:42.368 real	0m0.000s
00:01:42.368 user	0m0.000s
00:01:42.368 sys	0m0.000s
00:01:42.368 12:27:41 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:42.368 12:27:41 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:42.368 ************************************
00:01:42.368 END TEST ubsan
00:01:42.368 ************************************
00:01:42.368 12:27:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:42.368 12:27:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:42.368 12:27:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:42.368 12:27:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:42.369 12:27:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:42.369 12:27:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:42.369 12:27:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:42.369 12:27:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:42.369 12:27:42 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:01:42.628 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:42.628 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:43.197 Using 'verbs' RDMA provider
00:01:59.038 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:17.140 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:17.140 Creating mk/config.mk...done.
00:02:17.140 Creating mk/cc.flags.mk...done.
00:02:17.140 Type 'make' to build.
00:02:17.141 12:28:14 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:17.141 12:28:14 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:17.141 12:28:14 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:17.141 12:28:14 -- common/autotest_common.sh@10 -- $ set +x
00:02:17.141 ************************************
00:02:17.141 START TEST make
00:02:17.141 ************************************
00:02:17.141 12:28:14 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:27.138 The Meson build system
00:02:27.138 Version: 1.5.0
00:02:27.138 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:27.138 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:27.138 Build type: native build
00:02:27.138 Program cat found: YES (/usr/bin/cat)
00:02:27.138 Project name: DPDK
00:02:27.138 Project version: 24.03.0
00:02:27.138 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:27.138 C linker for the host machine: cc ld.bfd 2.40-14
00:02:27.138 Host machine cpu family: x86_64
00:02:27.138 Host machine cpu: x86_64
00:02:27.138 Message: ## Building in Developer Mode ##
00:02:27.138 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:27.138 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:27.138 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:27.138 Program python3 found: YES (/usr/bin/python3)
00:02:27.138 Program cat found: YES (/usr/bin/cat)
00:02:27.138 Compiler for C supports arguments -march=native: YES
00:02:27.138 Checking for size of "void *" : 8
00:02:27.138 Checking for size of "void *" : 8 (cached)
00:02:27.138 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:27.138 Library m found: YES
00:02:27.138 Library numa found: YES
00:02:27.138 Has header "numaif.h" : YES
00:02:27.138 Library fdt found: NO
00:02:27.138 Library execinfo found: NO
00:02:27.138 Has header "execinfo.h" : YES
00:02:27.138 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:27.138 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:27.138 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:27.138 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:27.138 Run-time dependency openssl found: YES 3.1.1
00:02:27.138 Run-time dependency libpcap found: YES 1.10.4
00:02:27.138 Has header "pcap.h" with dependency libpcap: YES
00:02:27.138 Compiler for C supports arguments -Wcast-qual: YES
00:02:27.138 Compiler for C supports arguments -Wdeprecated: YES
00:02:27.138 Compiler for C supports arguments -Wformat: YES
00:02:27.138 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:27.138 Compiler for C supports arguments -Wformat-security: NO
00:02:27.138 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:27.138 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:27.138 Compiler for C supports arguments -Wnested-externs: YES
00:02:27.138 Compiler for C supports arguments -Wold-style-definition: YES
00:02:27.138 Compiler for C supports arguments -Wpointer-arith: YES
00:02:27.138 Compiler for C supports arguments -Wsign-compare: YES
00:02:27.138 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:27.138 Compiler for C supports arguments -Wundef: YES
00:02:27.138 Compiler for C supports arguments -Wwrite-strings: YES
00:02:27.138 Compiler for C supports
arguments -Wno-address-of-packed-member: YES 00:02:27.138 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:27.138 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:27.138 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:27.138 Program objdump found: YES (/usr/bin/objdump) 00:02:27.138 Compiler for C supports arguments -mavx512f: YES 00:02:27.138 Checking if "AVX512 checking" compiles: YES 00:02:27.138 Fetching value of define "__SSE4_2__" : 1 00:02:27.138 Fetching value of define "__AES__" : 1 00:02:27.138 Fetching value of define "__AVX__" : 1 00:02:27.138 Fetching value of define "__AVX2__" : 1 00:02:27.138 Fetching value of define "__AVX512BW__" : 1 00:02:27.138 Fetching value of define "__AVX512CD__" : 1 00:02:27.138 Fetching value of define "__AVX512DQ__" : 1 00:02:27.138 Fetching value of define "__AVX512F__" : 1 00:02:27.138 Fetching value of define "__AVX512VL__" : 1 00:02:27.138 Fetching value of define "__PCLMUL__" : 1 00:02:27.138 Fetching value of define "__RDRND__" : 1 00:02:27.138 Fetching value of define "__RDSEED__" : 1 00:02:27.138 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:27.138 Fetching value of define "__znver1__" : (undefined) 00:02:27.138 Fetching value of define "__znver2__" : (undefined) 00:02:27.138 Fetching value of define "__znver3__" : (undefined) 00:02:27.138 Fetching value of define "__znver4__" : (undefined) 00:02:27.138 Library asan found: YES 00:02:27.138 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:27.138 Message: lib/log: Defining dependency "log" 00:02:27.138 Message: lib/kvargs: Defining dependency "kvargs" 00:02:27.138 Message: lib/telemetry: Defining dependency "telemetry" 00:02:27.138 Library rt found: YES 00:02:27.138 Checking for function "getentropy" : NO 00:02:27.139 Message: lib/eal: Defining dependency "eal" 00:02:27.139 Message: lib/ring: Defining dependency "ring" 00:02:27.139 Message: lib/rcu: Defining 
dependency "rcu" 00:02:27.139 Message: lib/mempool: Defining dependency "mempool" 00:02:27.139 Message: lib/mbuf: Defining dependency "mbuf" 00:02:27.139 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:27.139 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:27.139 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:27.139 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:27.139 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:27.139 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:27.139 Compiler for C supports arguments -mpclmul: YES 00:02:27.139 Compiler for C supports arguments -maes: YES 00:02:27.139 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:27.139 Compiler for C supports arguments -mavx512bw: YES 00:02:27.139 Compiler for C supports arguments -mavx512dq: YES 00:02:27.139 Compiler for C supports arguments -mavx512vl: YES 00:02:27.139 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:27.139 Compiler for C supports arguments -mavx2: YES 00:02:27.139 Compiler for C supports arguments -mavx: YES 00:02:27.139 Message: lib/net: Defining dependency "net" 00:02:27.139 Message: lib/meter: Defining dependency "meter" 00:02:27.139 Message: lib/ethdev: Defining dependency "ethdev" 00:02:27.139 Message: lib/pci: Defining dependency "pci" 00:02:27.139 Message: lib/cmdline: Defining dependency "cmdline" 00:02:27.139 Message: lib/hash: Defining dependency "hash" 00:02:27.139 Message: lib/timer: Defining dependency "timer" 00:02:27.139 Message: lib/compressdev: Defining dependency "compressdev" 00:02:27.139 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:27.139 Message: lib/dmadev: Defining dependency "dmadev" 00:02:27.139 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:27.139 Message: lib/power: Defining dependency "power" 00:02:27.139 Message: lib/reorder: Defining dependency "reorder" 00:02:27.139 Message: lib/security: Defining dependency "security" 
00:02:27.139 Has header "linux/userfaultfd.h" : YES 00:02:27.139 Has header "linux/vduse.h" : YES 00:02:27.139 Message: lib/vhost: Defining dependency "vhost" 00:02:27.139 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:27.139 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:27.139 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:27.139 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:27.139 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:27.139 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:27.139 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:27.139 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:27.139 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:27.139 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:27.139 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:27.139 Configuring doxy-api-html.conf using configuration 00:02:27.139 Configuring doxy-api-man.conf using configuration 00:02:27.139 Program mandb found: YES (/usr/bin/mandb) 00:02:27.139 Program sphinx-build found: NO 00:02:27.139 Configuring rte_build_config.h using configuration 00:02:27.139 Message: 00:02:27.139 ================= 00:02:27.139 Applications Enabled 00:02:27.139 ================= 00:02:27.139 00:02:27.139 apps: 00:02:27.139 00:02:27.139 00:02:27.139 Message: 00:02:27.139 ================= 00:02:27.139 Libraries Enabled 00:02:27.139 ================= 00:02:27.139 00:02:27.139 libs: 00:02:27.139 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:27.139 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:27.139 cryptodev, dmadev, power, reorder, security, vhost, 00:02:27.139 00:02:27.139 Message: 00:02:27.139 =============== 00:02:27.139 Drivers Enabled 00:02:27.139 =============== 00:02:27.139 
00:02:27.139 common: 00:02:27.139 00:02:27.139 bus: 00:02:27.139 pci, vdev, 00:02:27.139 mempool: 00:02:27.139 ring, 00:02:27.139 dma: 00:02:27.139 00:02:27.139 net: 00:02:27.139 00:02:27.139 crypto: 00:02:27.139 00:02:27.139 compress: 00:02:27.139 00:02:27.139 vdpa: 00:02:27.139 00:02:27.139 00:02:27.139 Message: 00:02:27.139 ================= 00:02:27.139 Content Skipped 00:02:27.139 ================= 00:02:27.139 00:02:27.139 apps: 00:02:27.139 dumpcap: explicitly disabled via build config 00:02:27.139 graph: explicitly disabled via build config 00:02:27.139 pdump: explicitly disabled via build config 00:02:27.139 proc-info: explicitly disabled via build config 00:02:27.139 test-acl: explicitly disabled via build config 00:02:27.139 test-bbdev: explicitly disabled via build config 00:02:27.139 test-cmdline: explicitly disabled via build config 00:02:27.139 test-compress-perf: explicitly disabled via build config 00:02:27.139 test-crypto-perf: explicitly disabled via build config 00:02:27.139 test-dma-perf: explicitly disabled via build config 00:02:27.139 test-eventdev: explicitly disabled via build config 00:02:27.139 test-fib: explicitly disabled via build config 00:02:27.139 test-flow-perf: explicitly disabled via build config 00:02:27.139 test-gpudev: explicitly disabled via build config 00:02:27.139 test-mldev: explicitly disabled via build config 00:02:27.139 test-pipeline: explicitly disabled via build config 00:02:27.139 test-pmd: explicitly disabled via build config 00:02:27.139 test-regex: explicitly disabled via build config 00:02:27.139 test-sad: explicitly disabled via build config 00:02:27.139 test-security-perf: explicitly disabled via build config 00:02:27.139 00:02:27.139 libs: 00:02:27.139 argparse: explicitly disabled via build config 00:02:27.139 metrics: explicitly disabled via build config 00:02:27.139 acl: explicitly disabled via build config 00:02:27.139 bbdev: explicitly disabled via build config 00:02:27.139 bitratestats: explicitly 
disabled via build config 00:02:27.139 bpf: explicitly disabled via build config 00:02:27.139 cfgfile: explicitly disabled via build config 00:02:27.139 distributor: explicitly disabled via build config 00:02:27.139 efd: explicitly disabled via build config 00:02:27.139 eventdev: explicitly disabled via build config 00:02:27.139 dispatcher: explicitly disabled via build config 00:02:27.139 gpudev: explicitly disabled via build config 00:02:27.139 gro: explicitly disabled via build config 00:02:27.139 gso: explicitly disabled via build config 00:02:27.139 ip_frag: explicitly disabled via build config 00:02:27.139 jobstats: explicitly disabled via build config 00:02:27.139 latencystats: explicitly disabled via build config 00:02:27.139 lpm: explicitly disabled via build config 00:02:27.139 member: explicitly disabled via build config 00:02:27.139 pcapng: explicitly disabled via build config 00:02:27.139 rawdev: explicitly disabled via build config 00:02:27.139 regexdev: explicitly disabled via build config 00:02:27.139 mldev: explicitly disabled via build config 00:02:27.139 rib: explicitly disabled via build config 00:02:27.139 sched: explicitly disabled via build config 00:02:27.139 stack: explicitly disabled via build config 00:02:27.139 ipsec: explicitly disabled via build config 00:02:27.139 pdcp: explicitly disabled via build config 00:02:27.139 fib: explicitly disabled via build config 00:02:27.139 port: explicitly disabled via build config 00:02:27.139 pdump: explicitly disabled via build config 00:02:27.139 table: explicitly disabled via build config 00:02:27.139 pipeline: explicitly disabled via build config 00:02:27.139 graph: explicitly disabled via build config 00:02:27.139 node: explicitly disabled via build config 00:02:27.139 00:02:27.139 drivers: 00:02:27.139 common/cpt: not in enabled drivers build config 00:02:27.139 common/dpaax: not in enabled drivers build config 00:02:27.139 common/iavf: not in enabled drivers build config 00:02:27.139 
common/idpf: not in enabled drivers build config 00:02:27.139 common/ionic: not in enabled drivers build config 00:02:27.139 common/mvep: not in enabled drivers build config 00:02:27.139 common/octeontx: not in enabled drivers build config 00:02:27.139 bus/auxiliary: not in enabled drivers build config 00:02:27.139 bus/cdx: not in enabled drivers build config 00:02:27.139 bus/dpaa: not in enabled drivers build config 00:02:27.139 bus/fslmc: not in enabled drivers build config 00:02:27.139 bus/ifpga: not in enabled drivers build config 00:02:27.139 bus/platform: not in enabled drivers build config 00:02:27.139 bus/uacce: not in enabled drivers build config 00:02:27.139 bus/vmbus: not in enabled drivers build config 00:02:27.139 common/cnxk: not in enabled drivers build config 00:02:27.139 common/mlx5: not in enabled drivers build config 00:02:27.139 common/nfp: not in enabled drivers build config 00:02:27.139 common/nitrox: not in enabled drivers build config 00:02:27.139 common/qat: not in enabled drivers build config 00:02:27.139 common/sfc_efx: not in enabled drivers build config 00:02:27.139 mempool/bucket: not in enabled drivers build config 00:02:27.139 mempool/cnxk: not in enabled drivers build config 00:02:27.139 mempool/dpaa: not in enabled drivers build config 00:02:27.139 mempool/dpaa2: not in enabled drivers build config 00:02:27.139 mempool/octeontx: not in enabled drivers build config 00:02:27.139 mempool/stack: not in enabled drivers build config 00:02:27.139 dma/cnxk: not in enabled drivers build config 00:02:27.139 dma/dpaa: not in enabled drivers build config 00:02:27.139 dma/dpaa2: not in enabled drivers build config 00:02:27.139 dma/hisilicon: not in enabled drivers build config 00:02:27.139 dma/idxd: not in enabled drivers build config 00:02:27.139 dma/ioat: not in enabled drivers build config 00:02:27.139 dma/skeleton: not in enabled drivers build config 00:02:27.139 net/af_packet: not in enabled drivers build config 00:02:27.139 net/af_xdp: 
not in enabled drivers build config 00:02:27.139 net/ark: not in enabled drivers build config 00:02:27.139 net/atlantic: not in enabled drivers build config 00:02:27.139 net/avp: not in enabled drivers build config 00:02:27.139 net/axgbe: not in enabled drivers build config 00:02:27.139 net/bnx2x: not in enabled drivers build config 00:02:27.139 net/bnxt: not in enabled drivers build config 00:02:27.139 net/bonding: not in enabled drivers build config 00:02:27.139 net/cnxk: not in enabled drivers build config 00:02:27.139 net/cpfl: not in enabled drivers build config 00:02:27.139 net/cxgbe: not in enabled drivers build config 00:02:27.139 net/dpaa: not in enabled drivers build config 00:02:27.139 net/dpaa2: not in enabled drivers build config 00:02:27.139 net/e1000: not in enabled drivers build config 00:02:27.139 net/ena: not in enabled drivers build config 00:02:27.139 net/enetc: not in enabled drivers build config 00:02:27.139 net/enetfec: not in enabled drivers build config 00:02:27.140 net/enic: not in enabled drivers build config 00:02:27.140 net/failsafe: not in enabled drivers build config 00:02:27.140 net/fm10k: not in enabled drivers build config 00:02:27.140 net/gve: not in enabled drivers build config 00:02:27.140 net/hinic: not in enabled drivers build config 00:02:27.140 net/hns3: not in enabled drivers build config 00:02:27.140 net/i40e: not in enabled drivers build config 00:02:27.140 net/iavf: not in enabled drivers build config 00:02:27.140 net/ice: not in enabled drivers build config 00:02:27.140 net/idpf: not in enabled drivers build config 00:02:27.140 net/igc: not in enabled drivers build config 00:02:27.140 net/ionic: not in enabled drivers build config 00:02:27.140 net/ipn3ke: not in enabled drivers build config 00:02:27.140 net/ixgbe: not in enabled drivers build config 00:02:27.140 net/mana: not in enabled drivers build config 00:02:27.140 net/memif: not in enabled drivers build config 00:02:27.140 net/mlx4: not in enabled drivers build 
config 00:02:27.140 net/mlx5: not in enabled drivers build config 00:02:27.140 net/mvneta: not in enabled drivers build config 00:02:27.140 net/mvpp2: not in enabled drivers build config 00:02:27.140 net/netvsc: not in enabled drivers build config 00:02:27.140 net/nfb: not in enabled drivers build config 00:02:27.140 net/nfp: not in enabled drivers build config 00:02:27.140 net/ngbe: not in enabled drivers build config 00:02:27.140 net/null: not in enabled drivers build config 00:02:27.140 net/octeontx: not in enabled drivers build config 00:02:27.140 net/octeon_ep: not in enabled drivers build config 00:02:27.140 net/pcap: not in enabled drivers build config 00:02:27.140 net/pfe: not in enabled drivers build config 00:02:27.140 net/qede: not in enabled drivers build config 00:02:27.140 net/ring: not in enabled drivers build config 00:02:27.140 net/sfc: not in enabled drivers build config 00:02:27.140 net/softnic: not in enabled drivers build config 00:02:27.140 net/tap: not in enabled drivers build config 00:02:27.140 net/thunderx: not in enabled drivers build config 00:02:27.140 net/txgbe: not in enabled drivers build config 00:02:27.140 net/vdev_netvsc: not in enabled drivers build config 00:02:27.140 net/vhost: not in enabled drivers build config 00:02:27.140 net/virtio: not in enabled drivers build config 00:02:27.140 net/vmxnet3: not in enabled drivers build config 00:02:27.140 raw/*: missing internal dependency, "rawdev" 00:02:27.140 crypto/armv8: not in enabled drivers build config 00:02:27.140 crypto/bcmfs: not in enabled drivers build config 00:02:27.140 crypto/caam_jr: not in enabled drivers build config 00:02:27.140 crypto/ccp: not in enabled drivers build config 00:02:27.140 crypto/cnxk: not in enabled drivers build config 00:02:27.140 crypto/dpaa_sec: not in enabled drivers build config 00:02:27.140 crypto/dpaa2_sec: not in enabled drivers build config 00:02:27.140 crypto/ipsec_mb: not in enabled drivers build config 00:02:27.140 crypto/mlx5: not in 
enabled drivers build config 00:02:27.140 crypto/mvsam: not in enabled drivers build config 00:02:27.140 crypto/nitrox: not in enabled drivers build config 00:02:27.140 crypto/null: not in enabled drivers build config 00:02:27.140 crypto/octeontx: not in enabled drivers build config 00:02:27.140 crypto/openssl: not in enabled drivers build config 00:02:27.140 crypto/scheduler: not in enabled drivers build config 00:02:27.140 crypto/uadk: not in enabled drivers build config 00:02:27.140 crypto/virtio: not in enabled drivers build config 00:02:27.140 compress/isal: not in enabled drivers build config 00:02:27.140 compress/mlx5: not in enabled drivers build config 00:02:27.140 compress/nitrox: not in enabled drivers build config 00:02:27.140 compress/octeontx: not in enabled drivers build config 00:02:27.140 compress/zlib: not in enabled drivers build config 00:02:27.140 regex/*: missing internal dependency, "regexdev" 00:02:27.140 ml/*: missing internal dependency, "mldev" 00:02:27.140 vdpa/ifc: not in enabled drivers build config 00:02:27.140 vdpa/mlx5: not in enabled drivers build config 00:02:27.140 vdpa/nfp: not in enabled drivers build config 00:02:27.140 vdpa/sfc: not in enabled drivers build config 00:02:27.140 event/*: missing internal dependency, "eventdev" 00:02:27.140 baseband/*: missing internal dependency, "bbdev" 00:02:27.140 gpu/*: missing internal dependency, "gpudev" 00:02:27.140 00:02:27.140 00:02:27.140 Build targets in project: 85 00:02:27.140 00:02:27.140 DPDK 24.03.0 00:02:27.140 00:02:27.140 User defined options 00:02:27.140 buildtype : debug 00:02:27.140 default_library : shared 00:02:27.140 libdir : lib 00:02:27.140 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:27.140 b_sanitize : address 00:02:27.140 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:27.140 c_link_args : 00:02:27.140 cpu_instruction_set: native 00:02:27.140 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:27.140 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:27.140 enable_docs : false 00:02:27.140 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:27.140 enable_kmods : false 00:02:27.140 max_lcores : 128 00:02:27.140 tests : false 00:02:27.140 00:02:27.140 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:27.140 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:27.140 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:27.140 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:27.140 [3/268] Linking static target lib/librte_kvargs.a 00:02:27.140 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:27.140 [5/268] Linking static target lib/librte_log.a 00:02:27.140 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.140 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.140 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:27.140 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:27.140 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:27.140 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:27.140 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 
00:02:27.140 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:27.140 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:27.140 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:27.140 [16/268] Linking static target lib/librte_telemetry.a 00:02:27.140 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:27.140 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:27.140 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.400 [20/268] Linking target lib/librte_log.so.24.1 00:02:27.400 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:27.400 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:27.400 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:27.400 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:27.400 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:27.400 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:27.400 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:27.658 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:27.658 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:27.658 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:27.658 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:27.918 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.918 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:27.918 [34/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:27.918 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:27.918 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:27.918 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.178 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:28.178 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:28.178 [40/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:28.178 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:28.178 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:28.178 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:28.178 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:28.178 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:28.437 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:28.437 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:28.437 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:28.697 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:28.697 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:28.697 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:28.697 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:28.697 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:28.697 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:28.956 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:28.956 [56/268] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:28.956 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:28.956 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:28.956 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:29.216 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:29.216 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:29.216 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:29.216 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:29.216 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:29.476 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:29.476 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:29.476 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:29.735 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:29.735 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:29.735 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:29.735 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:29.735 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:29.735 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:29.995 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:29.995 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:29.995 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:29.995 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:29.995 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:30.257 [79/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:30.257 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:30.257 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:30.257 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:30.257 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:30.517 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:30.517 [85/268] Linking static target lib/librte_eal.a 00:02:30.517 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:30.517 [87/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:30.517 [88/268] Linking static target lib/librte_ring.a 00:02:30.517 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:30.777 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:30.777 [91/268] Linking static target lib/librte_rcu.a 00:02:30.777 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:30.777 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:30.777 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:30.777 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:30.777 [96/268] Linking static target lib/librte_mempool.a 00:02:31.036 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:31.036 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:31.036 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:31.036 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.036 [101/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.296 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 
00:02:31.296 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:31.296 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:31.296 [105/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:31.296 [106/268] Linking static target lib/librte_mbuf.a 00:02:31.296 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:31.296 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:31.556 [109/268] Linking static target lib/librte_meter.a 00:02:31.556 [110/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:31.556 [111/268] Linking static target lib/librte_net.a 00:02:31.556 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:31.816 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:31.816 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:31.816 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.816 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.816 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.816 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:32.385 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:32.385 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:32.385 [121/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.385 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:32.644 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:32.903 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:32.903 [125/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:32.903 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:32.903 [127/268] Linking static target lib/librte_pci.a 00:02:32.903 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:32.903 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:32.903 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:33.163 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:33.163 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:33.163 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:33.163 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:33.163 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:33.163 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:33.163 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:33.163 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:33.163 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:33.163 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:33.163 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:33.163 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:33.163 [143/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.423 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:33.423 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:33.682 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:33.682 [147/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:33.682 [148/268] Linking static target lib/librte_cmdline.a 00:02:33.682 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:33.682 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:33.682 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:33.941 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:33.941 [153/268] Linking static target lib/librte_timer.a 00:02:33.941 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:33.941 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:34.200 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:34.460 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:34.460 [158/268] Linking static target lib/librte_compressdev.a 00:02:34.460 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:34.460 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:34.460 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:34.460 [162/268] Linking static target lib/librte_hash.a 00:02:34.718 [163/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.718 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:34.718 [165/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:34.718 [166/268] Linking static target lib/librte_dmadev.a 00:02:34.718 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:34.977 [168/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:34.977 [169/268] Linking static target lib/librte_ethdev.a 00:02:34.977 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:34.977 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.322 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:35.322 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:35.322 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.322 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.580 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:35.580 [177/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.580 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:35.580 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.838 [180/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.838 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:35.838 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:35.838 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:35.838 [184/268] Linking static target lib/librte_power.a 00:02:36.096 [185/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.096 [186/268] Linking static target lib/librte_cryptodev.a 00:02:36.096 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.096 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:36.355 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.355 [190/268] Linking static target lib/librte_reorder.a 00:02:36.355 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:36.355 [192/268] Linking static target 
lib/librte_security.a 00:02:36.355 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.922 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.922 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.180 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.180 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:37.180 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:37.180 [199/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.438 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.438 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:37.696 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:37.696 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:37.696 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:37.696 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:37.954 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:37.954 [207/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:37.954 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:37.954 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.212 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.212 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:38.212 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.212 [213/268] Compiling C object 
drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.212 [214/268] Linking static target drivers/librte_bus_pci.a 00:02:38.212 [215/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:38.212 [216/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.212 [217/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.212 [218/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.470 [219/268] Linking static target drivers/librte_bus_vdev.a 00:02:38.470 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:38.470 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.728 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.728 [223/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.728 [224/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.728 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.728 [226/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.728 [227/268] Linking static target drivers/librte_mempool_ring.a 00:02:39.729 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.628 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.628 [230/268] Linking target lib/librte_eal.so.24.1 00:02:41.886 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:41.886 [232/268] Linking target lib/librte_pci.so.24.1 00:02:41.886 [233/268] Linking target lib/librte_timer.so.24.1 00:02:41.886 [234/268] Linking target 
lib/librte_ring.so.24.1 00:02:41.886 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:41.886 [236/268] Linking target lib/librte_meter.so.24.1 00:02:41.886 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:42.144 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:42.144 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:42.144 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:42.144 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:42.144 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:42.144 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:42.144 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:42.144 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:42.144 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.144 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.402 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:42.402 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.402 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:42.402 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:42.402 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:42.402 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:42.402 [254/268] Linking target lib/librte_net.so.24.1 00:02:42.659 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:42.659 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:42.659 [257/268] Linking target lib/librte_cmdline.so.24.1 00:02:42.659 [258/268] Linking target lib/librte_hash.so.24.1 00:02:42.659 [259/268] Linking target 
lib/librte_security.so.24.1 00:02:42.917 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:43.851 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.109 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:44.109 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:44.109 [264/268] Linking target lib/librte_power.so.24.1 00:02:45.042 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:45.042 [266/268] Linking static target lib/librte_vhost.a 00:02:47.567 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.567 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:47.567 INFO: autodetecting backend as ninja 00:02:47.567 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:09.484 CC lib/ut_mock/mock.o 00:03:09.484 CC lib/ut/ut.o 00:03:09.484 CC lib/log/log_flags.o 00:03:09.484 CC lib/log/log.o 00:03:09.484 CC lib/log/log_deprecated.o 00:03:09.484 LIB libspdk_ut_mock.a 00:03:09.484 LIB libspdk_ut.a 00:03:09.484 SO libspdk_ut_mock.so.6.0 00:03:09.484 SO libspdk_ut.so.2.0 00:03:09.484 LIB libspdk_log.a 00:03:09.484 SYMLINK libspdk_ut_mock.so 00:03:09.484 SYMLINK libspdk_ut.so 00:03:09.484 SO libspdk_log.so.7.1 00:03:09.484 SYMLINK libspdk_log.so 00:03:09.484 CXX lib/trace_parser/trace.o 00:03:09.484 CC lib/ioat/ioat.o 00:03:09.484 CC lib/util/base64.o 00:03:09.484 CC lib/util/bit_array.o 00:03:09.484 CC lib/util/cpuset.o 00:03:09.484 CC lib/util/crc16.o 00:03:09.484 CC lib/util/crc32.o 00:03:09.484 CC lib/util/crc32c.o 00:03:09.484 CC lib/dma/dma.o 00:03:09.484 CC lib/vfio_user/host/vfio_user_pci.o 00:03:09.484 CC lib/util/crc32_ieee.o 00:03:09.484 CC lib/util/crc64.o 00:03:09.484 CC lib/util/dif.o 00:03:09.484 CC lib/util/fd.o 00:03:09.484 LIB libspdk_dma.a 00:03:09.484 CC 
lib/vfio_user/host/vfio_user.o 00:03:09.484 SO libspdk_dma.so.5.0 00:03:09.484 CC lib/util/fd_group.o 00:03:09.484 CC lib/util/file.o 00:03:09.484 LIB libspdk_ioat.a 00:03:09.484 CC lib/util/hexlify.o 00:03:09.484 SYMLINK libspdk_dma.so 00:03:09.484 CC lib/util/iov.o 00:03:09.484 SO libspdk_ioat.so.7.0 00:03:09.484 CC lib/util/math.o 00:03:09.484 SYMLINK libspdk_ioat.so 00:03:09.484 CC lib/util/net.o 00:03:09.484 CC lib/util/pipe.o 00:03:09.484 CC lib/util/strerror_tls.o 00:03:09.484 CC lib/util/string.o 00:03:09.484 LIB libspdk_vfio_user.a 00:03:09.484 CC lib/util/uuid.o 00:03:09.484 CC lib/util/xor.o 00:03:09.484 SO libspdk_vfio_user.so.5.0 00:03:09.484 CC lib/util/zipf.o 00:03:09.484 CC lib/util/md5.o 00:03:09.484 SYMLINK libspdk_vfio_user.so 00:03:09.484 LIB libspdk_util.a 00:03:09.484 SO libspdk_util.so.10.1 00:03:09.484 LIB libspdk_trace_parser.a 00:03:09.484 SO libspdk_trace_parser.so.6.0 00:03:09.484 SYMLINK libspdk_util.so 00:03:09.484 SYMLINK libspdk_trace_parser.so 00:03:09.484 CC lib/rdma_utils/rdma_utils.o 00:03:09.484 CC lib/vmd/vmd.o 00:03:09.484 CC lib/vmd/led.o 00:03:09.484 CC lib/json/json_parse.o 00:03:09.484 CC lib/conf/conf.o 00:03:09.484 CC lib/json/json_write.o 00:03:09.484 CC lib/env_dpdk/env.o 00:03:09.484 CC lib/json/json_util.o 00:03:09.484 CC lib/env_dpdk/memory.o 00:03:09.484 CC lib/idxd/idxd.o 00:03:09.484 CC lib/idxd/idxd_user.o 00:03:09.484 LIB libspdk_conf.a 00:03:09.484 SO libspdk_conf.so.6.0 00:03:09.484 CC lib/env_dpdk/pci.o 00:03:09.484 CC lib/env_dpdk/init.o 00:03:09.484 LIB libspdk_json.a 00:03:09.484 SYMLINK libspdk_conf.so 00:03:09.484 CC lib/idxd/idxd_kernel.o 00:03:09.484 LIB libspdk_rdma_utils.a 00:03:09.484 SO libspdk_json.so.6.0 00:03:09.484 SO libspdk_rdma_utils.so.1.0 00:03:09.484 SYMLINK libspdk_json.so 00:03:09.484 SYMLINK libspdk_rdma_utils.so 00:03:09.484 CC lib/env_dpdk/threads.o 00:03:09.484 CC lib/env_dpdk/pci_ioat.o 00:03:09.484 CC lib/env_dpdk/pci_virtio.o 00:03:09.484 CC lib/rdma_provider/common.o 
00:03:09.484 CC lib/jsonrpc/jsonrpc_server.o 00:03:09.484 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:09.484 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:09.484 CC lib/env_dpdk/pci_vmd.o 00:03:09.484 LIB libspdk_idxd.a 00:03:09.484 CC lib/env_dpdk/pci_idxd.o 00:03:09.484 SO libspdk_idxd.so.12.1 00:03:09.484 LIB libspdk_vmd.a 00:03:09.484 CC lib/env_dpdk/pci_event.o 00:03:09.484 SO libspdk_vmd.so.6.0 00:03:09.484 CC lib/jsonrpc/jsonrpc_client.o 00:03:09.484 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:09.484 CC lib/env_dpdk/sigbus_handler.o 00:03:09.484 SYMLINK libspdk_idxd.so 00:03:09.484 CC lib/env_dpdk/pci_dpdk.o 00:03:09.743 SYMLINK libspdk_vmd.so 00:03:09.743 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:09.743 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:09.743 LIB libspdk_rdma_provider.a 00:03:09.743 SO libspdk_rdma_provider.so.7.0 00:03:09.743 SYMLINK libspdk_rdma_provider.so 00:03:09.743 LIB libspdk_jsonrpc.a 00:03:10.001 SO libspdk_jsonrpc.so.6.0 00:03:10.001 SYMLINK libspdk_jsonrpc.so 00:03:10.568 CC lib/rpc/rpc.o 00:03:10.568 LIB libspdk_env_dpdk.a 00:03:10.568 SO libspdk_env_dpdk.so.15.1 00:03:10.826 LIB libspdk_rpc.a 00:03:10.826 SO libspdk_rpc.so.6.0 00:03:10.826 SYMLINK libspdk_env_dpdk.so 00:03:10.826 SYMLINK libspdk_rpc.so 00:03:11.093 CC lib/keyring/keyring.o 00:03:11.369 CC lib/keyring/keyring_rpc.o 00:03:11.369 CC lib/notify/notify_rpc.o 00:03:11.369 CC lib/notify/notify.o 00:03:11.369 CC lib/trace/trace_rpc.o 00:03:11.369 CC lib/trace/trace.o 00:03:11.369 CC lib/trace/trace_flags.o 00:03:11.369 LIB libspdk_notify.a 00:03:11.369 SO libspdk_notify.so.6.0 00:03:11.369 LIB libspdk_keyring.a 00:03:11.637 SYMLINK libspdk_notify.so 00:03:11.637 SO libspdk_keyring.so.2.0 00:03:11.638 LIB libspdk_trace.a 00:03:11.638 SYMLINK libspdk_keyring.so 00:03:11.638 SO libspdk_trace.so.11.0 00:03:11.638 SYMLINK libspdk_trace.so 00:03:12.206 CC lib/thread/iobuf.o 00:03:12.206 CC lib/thread/thread.o 00:03:12.206 CC lib/sock/sock.o 00:03:12.206 CC lib/sock/sock_rpc.o 00:03:12.772 
LIB libspdk_sock.a 00:03:12.772 SO libspdk_sock.so.10.0 00:03:12.772 SYMLINK libspdk_sock.so 00:03:13.339 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:13.339 CC lib/nvme/nvme_ctrlr.o 00:03:13.339 CC lib/nvme/nvme_fabric.o 00:03:13.339 CC lib/nvme/nvme_ns_cmd.o 00:03:13.339 CC lib/nvme/nvme_pcie.o 00:03:13.339 CC lib/nvme/nvme_ns.o 00:03:13.339 CC lib/nvme/nvme_pcie_common.o 00:03:13.339 CC lib/nvme/nvme_qpair.o 00:03:13.339 CC lib/nvme/nvme.o 00:03:13.904 CC lib/nvme/nvme_quirks.o 00:03:13.904 LIB libspdk_thread.a 00:03:13.904 CC lib/nvme/nvme_transport.o 00:03:13.904 SO libspdk_thread.so.11.0 00:03:14.162 CC lib/nvme/nvme_discovery.o 00:03:14.162 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:14.162 SYMLINK libspdk_thread.so 00:03:14.162 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:14.162 CC lib/nvme/nvme_tcp.o 00:03:14.162 CC lib/nvme/nvme_opal.o 00:03:14.422 CC lib/accel/accel.o 00:03:14.422 CC lib/nvme/nvme_io_msg.o 00:03:14.680 CC lib/accel/accel_rpc.o 00:03:14.680 CC lib/blob/blobstore.o 00:03:14.680 CC lib/blob/request.o 00:03:14.938 CC lib/blob/zeroes.o 00:03:14.938 CC lib/blob/blob_bs_dev.o 00:03:14.938 CC lib/nvme/nvme_poll_group.o 00:03:14.938 CC lib/accel/accel_sw.o 00:03:14.938 CC lib/nvme/nvme_zns.o 00:03:15.197 CC lib/nvme/nvme_stubs.o 00:03:15.197 CC lib/nvme/nvme_auth.o 00:03:15.197 CC lib/nvme/nvme_cuse.o 00:03:15.455 CC lib/nvme/nvme_rdma.o 00:03:15.712 LIB libspdk_accel.a 00:03:15.712 SO libspdk_accel.so.16.0 00:03:15.712 SYMLINK libspdk_accel.so 00:03:15.712 CC lib/init/json_config.o 00:03:15.969 CC lib/virtio/virtio.o 00:03:15.969 CC lib/fsdev/fsdev.o 00:03:15.969 CC lib/bdev/bdev.o 00:03:15.969 CC lib/fsdev/fsdev_io.o 00:03:16.228 CC lib/fsdev/fsdev_rpc.o 00:03:16.228 CC lib/init/subsystem.o 00:03:16.228 CC lib/virtio/virtio_vhost_user.o 00:03:16.228 CC lib/virtio/virtio_vfio_user.o 00:03:16.228 CC lib/virtio/virtio_pci.o 00:03:16.228 CC lib/init/subsystem_rpc.o 00:03:16.488 CC lib/bdev/bdev_rpc.o 00:03:16.488 CC lib/init/rpc.o 00:03:16.488 CC lib/bdev/bdev_zone.o 
00:03:16.488 CC lib/bdev/part.o 00:03:16.746 CC lib/bdev/scsi_nvme.o 00:03:16.746 LIB libspdk_fsdev.a 00:03:16.746 LIB libspdk_init.a 00:03:16.746 LIB libspdk_virtio.a 00:03:16.746 SO libspdk_fsdev.so.2.0 00:03:16.746 SO libspdk_init.so.6.0 00:03:16.747 SO libspdk_virtio.so.7.0 00:03:16.747 SYMLINK libspdk_init.so 00:03:16.747 SYMLINK libspdk_fsdev.so 00:03:16.747 SYMLINK libspdk_virtio.so 00:03:17.313 LIB libspdk_nvme.a 00:03:17.313 CC lib/event/app.o 00:03:17.313 CC lib/event/reactor.o 00:03:17.313 CC lib/event/log_rpc.o 00:03:17.313 CC lib/event/app_rpc.o 00:03:17.313 CC lib/event/scheduler_static.o 00:03:17.313 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:17.313 SO libspdk_nvme.so.15.0 00:03:17.879 SYMLINK libspdk_nvme.so 00:03:17.879 LIB libspdk_event.a 00:03:17.879 SO libspdk_event.so.14.0 00:03:17.879 SYMLINK libspdk_event.so 00:03:18.139 LIB libspdk_fuse_dispatcher.a 00:03:18.139 SO libspdk_fuse_dispatcher.so.1.0 00:03:18.139 SYMLINK libspdk_fuse_dispatcher.so 00:03:19.077 LIB libspdk_blob.a 00:03:19.077 SO libspdk_blob.so.12.0 00:03:19.077 SYMLINK libspdk_blob.so 00:03:19.649 LIB libspdk_bdev.a 00:03:19.649 SO libspdk_bdev.so.17.0 00:03:19.649 CC lib/blobfs/blobfs.o 00:03:19.649 CC lib/blobfs/tree.o 00:03:19.649 CC lib/lvol/lvol.o 00:03:19.649 SYMLINK libspdk_bdev.so 00:03:19.908 CC lib/nvmf/ctrlr.o 00:03:19.908 CC lib/nvmf/ctrlr_discovery.o 00:03:19.908 CC lib/nvmf/ctrlr_bdev.o 00:03:19.908 CC lib/nvmf/subsystem.o 00:03:19.908 CC lib/ftl/ftl_core.o 00:03:19.908 CC lib/nbd/nbd.o 00:03:19.908 CC lib/ublk/ublk.o 00:03:19.908 CC lib/scsi/dev.o 00:03:20.167 CC lib/scsi/lun.o 00:03:20.424 CC lib/ftl/ftl_init.o 00:03:20.424 CC lib/nbd/nbd_rpc.o 00:03:20.683 CC lib/scsi/port.o 00:03:20.683 CC lib/nvmf/nvmf.o 00:03:20.683 LIB libspdk_blobfs.a 00:03:20.683 CC lib/ftl/ftl_layout.o 00:03:20.683 SO libspdk_blobfs.so.11.0 00:03:20.683 LIB libspdk_nbd.a 00:03:20.683 SO libspdk_nbd.so.7.0 00:03:20.683 SYMLINK libspdk_blobfs.so 00:03:20.683 CC lib/scsi/scsi.o 
00:03:20.683 CC lib/ftl/ftl_debug.o 00:03:20.942 SYMLINK libspdk_nbd.so 00:03:20.942 CC lib/ublk/ublk_rpc.o 00:03:20.942 CC lib/nvmf/nvmf_rpc.o 00:03:20.942 LIB libspdk_lvol.a 00:03:20.942 CC lib/nvmf/transport.o 00:03:20.942 SO libspdk_lvol.so.11.0 00:03:20.942 CC lib/scsi/scsi_bdev.o 00:03:20.942 SYMLINK libspdk_lvol.so 00:03:20.942 CC lib/scsi/scsi_pr.o 00:03:20.942 LIB libspdk_ublk.a 00:03:21.199 CC lib/scsi/scsi_rpc.o 00:03:21.200 CC lib/ftl/ftl_io.o 00:03:21.200 SO libspdk_ublk.so.3.0 00:03:21.200 SYMLINK libspdk_ublk.so 00:03:21.200 CC lib/ftl/ftl_sb.o 00:03:21.200 CC lib/scsi/task.o 00:03:21.458 CC lib/ftl/ftl_l2p.o 00:03:21.458 CC lib/ftl/ftl_l2p_flat.o 00:03:21.458 CC lib/nvmf/tcp.o 00:03:21.458 CC lib/nvmf/stubs.o 00:03:21.458 CC lib/nvmf/mdns_server.o 00:03:21.458 LIB libspdk_scsi.a 00:03:21.717 SO libspdk_scsi.so.9.0 00:03:21.717 CC lib/nvmf/rdma.o 00:03:21.717 CC lib/ftl/ftl_nv_cache.o 00:03:21.717 CC lib/nvmf/auth.o 00:03:21.717 SYMLINK libspdk_scsi.so 00:03:21.717 CC lib/ftl/ftl_band.o 00:03:21.717 CC lib/ftl/ftl_band_ops.o 00:03:21.976 CC lib/ftl/ftl_writer.o 00:03:22.234 CC lib/iscsi/conn.o 00:03:22.234 CC lib/ftl/ftl_rq.o 00:03:22.234 CC lib/ftl/ftl_reloc.o 00:03:22.234 CC lib/ftl/ftl_l2p_cache.o 00:03:22.234 CC lib/vhost/vhost.o 00:03:22.234 CC lib/vhost/vhost_rpc.o 00:03:22.492 CC lib/ftl/ftl_p2l.o 00:03:22.750 CC lib/ftl/ftl_p2l_log.o 00:03:22.750 CC lib/ftl/mngt/ftl_mngt.o 00:03:23.008 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:23.008 CC lib/iscsi/init_grp.o 00:03:23.008 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:23.008 CC lib/vhost/vhost_scsi.o 00:03:23.008 CC lib/iscsi/iscsi.o 00:03:23.008 CC lib/iscsi/param.o 00:03:23.266 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:23.266 CC lib/iscsi/portal_grp.o 00:03:23.266 CC lib/vhost/vhost_blk.o 00:03:23.266 CC lib/iscsi/tgt_node.o 00:03:23.524 CC lib/vhost/rte_vhost_user.o 00:03:23.524 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:23.524 CC lib/iscsi/iscsi_subsystem.o 00:03:23.524 CC lib/iscsi/iscsi_rpc.o 00:03:23.782 
CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:23.782 CC lib/iscsi/task.o 00:03:23.782 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:24.041 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:24.041 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:24.041 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:24.041 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:24.041 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:24.299 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:24.299 CC lib/ftl/utils/ftl_conf.o 00:03:24.299 CC lib/ftl/utils/ftl_md.o 00:03:24.299 CC lib/ftl/utils/ftl_mempool.o 00:03:24.299 CC lib/ftl/utils/ftl_bitmap.o 00:03:24.557 CC lib/ftl/utils/ftl_property.o 00:03:24.557 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:24.557 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:24.557 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:24.557 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:24.557 LIB libspdk_nvmf.a 00:03:24.815 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:24.815 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:24.815 LIB libspdk_vhost.a 00:03:24.815 SO libspdk_nvmf.so.20.0 00:03:24.815 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:24.815 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:24.815 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:24.815 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:24.815 SO libspdk_vhost.so.8.0 00:03:25.074 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:25.074 SYMLINK libspdk_vhost.so 00:03:25.074 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:25.074 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:25.074 LIB libspdk_iscsi.a 00:03:25.074 CC lib/ftl/base/ftl_base_dev.o 00:03:25.074 CC lib/ftl/base/ftl_base_bdev.o 00:03:25.074 CC lib/ftl/ftl_trace.o 00:03:25.074 SO libspdk_iscsi.so.8.0 00:03:25.074 SYMLINK libspdk_nvmf.so 00:03:25.332 SYMLINK libspdk_iscsi.so 00:03:25.332 LIB libspdk_ftl.a 00:03:25.590 SO libspdk_ftl.so.9.0 00:03:26.156 SYMLINK libspdk_ftl.so 00:03:26.414 CC module/env_dpdk/env_dpdk_rpc.o 00:03:26.672 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:26.672 CC module/scheduler/gscheduler/gscheduler.o 00:03:26.672 CC 
module/fsdev/aio/fsdev_aio.o 00:03:26.672 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:26.672 CC module/sock/posix/posix.o 00:03:26.672 CC module/keyring/file/keyring.o 00:03:26.672 CC module/keyring/linux/keyring.o 00:03:26.672 CC module/blob/bdev/blob_bdev.o 00:03:26.672 CC module/accel/error/accel_error.o 00:03:26.672 LIB libspdk_env_dpdk_rpc.a 00:03:26.672 SO libspdk_env_dpdk_rpc.so.6.0 00:03:26.672 SYMLINK libspdk_env_dpdk_rpc.so 00:03:26.672 LIB libspdk_scheduler_gscheduler.a 00:03:26.672 CC module/accel/error/accel_error_rpc.o 00:03:26.672 LIB libspdk_scheduler_dpdk_governor.a 00:03:26.672 CC module/keyring/file/keyring_rpc.o 00:03:26.672 SO libspdk_scheduler_gscheduler.so.4.0 00:03:26.672 CC module/keyring/linux/keyring_rpc.o 00:03:26.672 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:26.672 LIB libspdk_scheduler_dynamic.a 00:03:26.930 SYMLINK libspdk_scheduler_gscheduler.so 00:03:26.930 SO libspdk_scheduler_dynamic.so.4.0 00:03:26.930 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:26.930 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:26.930 LIB libspdk_keyring_file.a 00:03:26.930 LIB libspdk_accel_error.a 00:03:26.930 SYMLINK libspdk_scheduler_dynamic.so 00:03:26.930 LIB libspdk_keyring_linux.a 00:03:26.930 SO libspdk_keyring_file.so.2.0 00:03:26.930 SO libspdk_accel_error.so.2.0 00:03:26.930 LIB libspdk_blob_bdev.a 00:03:26.930 SO libspdk_keyring_linux.so.1.0 00:03:26.930 SO libspdk_blob_bdev.so.12.0 00:03:26.930 SYMLINK libspdk_keyring_file.so 00:03:26.930 SYMLINK libspdk_accel_error.so 00:03:26.930 SYMLINK libspdk_keyring_linux.so 00:03:26.930 CC module/fsdev/aio/linux_aio_mgr.o 00:03:26.930 SYMLINK libspdk_blob_bdev.so 00:03:26.930 CC module/accel/dsa/accel_dsa.o 00:03:26.930 CC module/accel/dsa/accel_dsa_rpc.o 00:03:26.930 CC module/accel/ioat/accel_ioat.o 00:03:26.930 CC module/accel/ioat/accel_ioat_rpc.o 00:03:27.188 CC module/accel/iaa/accel_iaa.o 00:03:27.188 LIB libspdk_accel_ioat.a 00:03:27.447 CC module/accel/iaa/accel_iaa_rpc.o 
00:03:27.447 CC module/bdev/delay/vbdev_delay.o 00:03:27.447 SO libspdk_accel_ioat.so.6.0 00:03:27.447 CC module/blobfs/bdev/blobfs_bdev.o 00:03:27.447 LIB libspdk_accel_dsa.a 00:03:27.447 CC module/bdev/error/vbdev_error.o 00:03:27.447 LIB libspdk_fsdev_aio.a 00:03:27.447 SO libspdk_accel_dsa.so.5.0 00:03:27.447 SYMLINK libspdk_accel_ioat.so 00:03:27.447 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:27.447 CC module/bdev/gpt/gpt.o 00:03:27.447 SO libspdk_fsdev_aio.so.1.0 00:03:27.447 CC module/bdev/lvol/vbdev_lvol.o 00:03:27.447 LIB libspdk_sock_posix.a 00:03:27.447 LIB libspdk_accel_iaa.a 00:03:27.447 SYMLINK libspdk_accel_dsa.so 00:03:27.447 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:27.447 SO libspdk_accel_iaa.so.3.0 00:03:27.447 SO libspdk_sock_posix.so.6.0 00:03:27.447 SYMLINK libspdk_fsdev_aio.so 00:03:27.707 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:27.707 CC module/bdev/error/vbdev_error_rpc.o 00:03:27.707 SYMLINK libspdk_accel_iaa.so 00:03:27.707 SYMLINK libspdk_sock_posix.so 00:03:27.707 LIB libspdk_blobfs_bdev.a 00:03:27.707 CC module/bdev/gpt/vbdev_gpt.o 00:03:27.707 SO libspdk_blobfs_bdev.so.6.0 00:03:27.707 SYMLINK libspdk_blobfs_bdev.so 00:03:27.707 LIB libspdk_bdev_error.a 00:03:27.707 CC module/bdev/malloc/bdev_malloc.o 00:03:27.707 CC module/bdev/null/bdev_null.o 00:03:27.707 SO libspdk_bdev_error.so.6.0 00:03:27.966 LIB libspdk_bdev_delay.a 00:03:27.966 SO libspdk_bdev_delay.so.6.0 00:03:27.966 CC module/bdev/nvme/bdev_nvme.o 00:03:27.966 SYMLINK libspdk_bdev_error.so 00:03:27.966 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:27.966 CC module/bdev/passthru/vbdev_passthru.o 00:03:27.966 SYMLINK libspdk_bdev_delay.so 00:03:27.966 CC module/bdev/raid/bdev_raid.o 00:03:27.966 CC module/bdev/raid/bdev_raid_rpc.o 00:03:27.966 LIB libspdk_bdev_gpt.a 00:03:27.966 CC module/bdev/raid/bdev_raid_sb.o 00:03:27.966 SO libspdk_bdev_gpt.so.6.0 00:03:28.225 SYMLINK libspdk_bdev_gpt.so 00:03:28.225 LIB libspdk_bdev_lvol.a 00:03:28.225 CC 
module/bdev/null/bdev_null_rpc.o 00:03:28.225 SO libspdk_bdev_lvol.so.6.0 00:03:28.225 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:28.225 SYMLINK libspdk_bdev_lvol.so 00:03:28.225 LIB libspdk_bdev_malloc.a 00:03:28.225 CC module/bdev/nvme/nvme_rpc.o 00:03:28.225 SO libspdk_bdev_malloc.so.6.0 00:03:28.225 CC module/bdev/split/vbdev_split.o 00:03:28.225 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:28.225 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:28.484 LIB libspdk_bdev_null.a 00:03:28.484 CC module/bdev/raid/raid0.o 00:03:28.484 SYMLINK libspdk_bdev_malloc.so 00:03:28.484 SO libspdk_bdev_null.so.6.0 00:03:28.484 SYMLINK libspdk_bdev_null.so 00:03:28.484 LIB libspdk_bdev_passthru.a 00:03:28.484 SO libspdk_bdev_passthru.so.6.0 00:03:28.484 CC module/bdev/aio/bdev_aio.o 00:03:28.743 CC module/bdev/split/vbdev_split_rpc.o 00:03:28.743 SYMLINK libspdk_bdev_passthru.so 00:03:28.743 CC module/bdev/aio/bdev_aio_rpc.o 00:03:28.743 CC module/bdev/ftl/bdev_ftl.o 00:03:28.743 CC module/bdev/raid/raid1.o 00:03:28.743 CC module/bdev/iscsi/bdev_iscsi.o 00:03:28.743 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:28.743 LIB libspdk_bdev_split.a 00:03:28.743 SO libspdk_bdev_split.so.6.0 00:03:28.743 CC module/bdev/raid/concat.o 00:03:29.001 SYMLINK libspdk_bdev_split.so 00:03:29.001 CC module/bdev/raid/raid5f.o 00:03:29.001 LIB libspdk_bdev_aio.a 00:03:29.001 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:29.001 LIB libspdk_bdev_zone_block.a 00:03:29.001 SO libspdk_bdev_aio.so.6.0 00:03:29.001 SO libspdk_bdev_zone_block.so.6.0 00:03:29.001 SYMLINK libspdk_bdev_aio.so 00:03:29.001 CC module/bdev/nvme/bdev_mdns_client.o 00:03:29.001 CC module/bdev/nvme/vbdev_opal.o 00:03:29.001 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:29.001 SYMLINK libspdk_bdev_zone_block.so 00:03:29.001 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:29.260 LIB libspdk_bdev_ftl.a 00:03:29.260 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:29.260 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:29.260 SO 
libspdk_bdev_ftl.so.6.0 00:03:29.260 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:29.260 LIB libspdk_bdev_iscsi.a 00:03:29.260 SYMLINK libspdk_bdev_ftl.so 00:03:29.260 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:29.260 SO libspdk_bdev_iscsi.so.6.0 00:03:29.519 SYMLINK libspdk_bdev_iscsi.so 00:03:29.519 LIB libspdk_bdev_raid.a 00:03:29.778 SO libspdk_bdev_raid.so.6.0 00:03:29.778 SYMLINK libspdk_bdev_raid.so 00:03:30.038 LIB libspdk_bdev_virtio.a 00:03:30.038 SO libspdk_bdev_virtio.so.6.0 00:03:30.038 SYMLINK libspdk_bdev_virtio.so 00:03:31.948 LIB libspdk_bdev_nvme.a 00:03:31.948 SO libspdk_bdev_nvme.so.7.1 00:03:31.948 SYMLINK libspdk_bdev_nvme.so 00:03:32.527 CC module/event/subsystems/sock/sock.o 00:03:32.527 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.527 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.527 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.527 CC module/event/subsystems/keyring/keyring.o 00:03:32.527 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.527 CC module/event/subsystems/vmd/vmd.o 00:03:32.527 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.527 CC module/event/subsystems/fsdev/fsdev.o 00:03:32.787 LIB libspdk_event_scheduler.a 00:03:32.787 LIB libspdk_event_sock.a 00:03:32.787 LIB libspdk_event_vhost_blk.a 00:03:32.787 LIB libspdk_event_iobuf.a 00:03:32.787 LIB libspdk_event_fsdev.a 00:03:32.787 LIB libspdk_event_vmd.a 00:03:32.787 SO libspdk_event_sock.so.5.0 00:03:32.787 SO libspdk_event_scheduler.so.4.0 00:03:32.787 LIB libspdk_event_keyring.a 00:03:32.787 SO libspdk_event_vhost_blk.so.3.0 00:03:32.787 SO libspdk_event_fsdev.so.1.0 00:03:32.787 SO libspdk_event_iobuf.so.3.0 00:03:32.787 SO libspdk_event_vmd.so.6.0 00:03:32.787 SO libspdk_event_keyring.so.1.0 00:03:32.787 SYMLINK libspdk_event_sock.so 00:03:32.787 SYMLINK libspdk_event_scheduler.so 00:03:32.787 SYMLINK libspdk_event_fsdev.so 00:03:32.787 SYMLINK libspdk_event_vhost_blk.so 00:03:32.787 SYMLINK libspdk_event_keyring.so 00:03:32.787 
SYMLINK libspdk_event_iobuf.so 00:03:32.787 SYMLINK libspdk_event_vmd.so 00:03:33.357 CC module/event/subsystems/accel/accel.o 00:03:33.616 LIB libspdk_event_accel.a 00:03:33.616 SO libspdk_event_accel.so.6.0 00:03:33.616 SYMLINK libspdk_event_accel.so 00:03:34.182 CC module/event/subsystems/bdev/bdev.o 00:03:34.442 LIB libspdk_event_bdev.a 00:03:34.442 SO libspdk_event_bdev.so.6.0 00:03:34.442 SYMLINK libspdk_event_bdev.so 00:03:34.702 CC module/event/subsystems/ublk/ublk.o 00:03:34.702 CC module/event/subsystems/nbd/nbd.o 00:03:34.702 CC module/event/subsystems/scsi/scsi.o 00:03:34.702 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.702 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.962 LIB libspdk_event_ublk.a 00:03:34.962 LIB libspdk_event_nbd.a 00:03:34.962 SO libspdk_event_ublk.so.3.0 00:03:34.962 SO libspdk_event_nbd.so.6.0 00:03:34.962 LIB libspdk_event_scsi.a 00:03:34.962 SYMLINK libspdk_event_ublk.so 00:03:34.962 SO libspdk_event_scsi.so.6.0 00:03:34.962 SYMLINK libspdk_event_nbd.so 00:03:35.223 LIB libspdk_event_nvmf.a 00:03:35.223 SYMLINK libspdk_event_scsi.so 00:03:35.223 SO libspdk_event_nvmf.so.6.0 00:03:35.223 SYMLINK libspdk_event_nvmf.so 00:03:35.491 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:35.491 CC module/event/subsystems/iscsi/iscsi.o 00:03:35.754 LIB libspdk_event_vhost_scsi.a 00:03:35.754 LIB libspdk_event_iscsi.a 00:03:35.754 SO libspdk_event_vhost_scsi.so.3.0 00:03:35.754 SO libspdk_event_iscsi.so.6.0 00:03:35.754 SYMLINK libspdk_event_vhost_scsi.so 00:03:35.754 SYMLINK libspdk_event_iscsi.so 00:03:36.014 SO libspdk.so.6.0 00:03:36.014 SYMLINK libspdk.so 00:03:36.583 CC test/rpc_client/rpc_client_test.o 00:03:36.583 TEST_HEADER include/spdk/accel.h 00:03:36.583 TEST_HEADER include/spdk/accel_module.h 00:03:36.583 TEST_HEADER include/spdk/assert.h 00:03:36.583 TEST_HEADER include/spdk/barrier.h 00:03:36.583 TEST_HEADER include/spdk/base64.h 00:03:36.583 CC app/trace_record/trace_record.o 00:03:36.583 TEST_HEADER 
include/spdk/bdev.h 00:03:36.583 TEST_HEADER include/spdk/bdev_module.h 00:03:36.583 TEST_HEADER include/spdk/bdev_zone.h 00:03:36.583 TEST_HEADER include/spdk/bit_array.h 00:03:36.583 TEST_HEADER include/spdk/bit_pool.h 00:03:36.583 TEST_HEADER include/spdk/blob_bdev.h 00:03:36.583 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:36.583 CXX app/trace/trace.o 00:03:36.583 TEST_HEADER include/spdk/blobfs.h 00:03:36.583 TEST_HEADER include/spdk/blob.h 00:03:36.583 TEST_HEADER include/spdk/conf.h 00:03:36.583 TEST_HEADER include/spdk/config.h 00:03:36.583 TEST_HEADER include/spdk/cpuset.h 00:03:36.583 TEST_HEADER include/spdk/crc16.h 00:03:36.583 TEST_HEADER include/spdk/crc32.h 00:03:36.583 TEST_HEADER include/spdk/crc64.h 00:03:36.583 TEST_HEADER include/spdk/dif.h 00:03:36.583 TEST_HEADER include/spdk/dma.h 00:03:36.583 TEST_HEADER include/spdk/endian.h 00:03:36.583 TEST_HEADER include/spdk/env_dpdk.h 00:03:36.583 TEST_HEADER include/spdk/env.h 00:03:36.583 TEST_HEADER include/spdk/event.h 00:03:36.583 CC app/nvmf_tgt/nvmf_main.o 00:03:36.584 TEST_HEADER include/spdk/fd_group.h 00:03:36.584 TEST_HEADER include/spdk/fd.h 00:03:36.584 TEST_HEADER include/spdk/file.h 00:03:36.584 TEST_HEADER include/spdk/fsdev.h 00:03:36.584 TEST_HEADER include/spdk/fsdev_module.h 00:03:36.584 TEST_HEADER include/spdk/ftl.h 00:03:36.584 TEST_HEADER include/spdk/gpt_spec.h 00:03:36.584 TEST_HEADER include/spdk/hexlify.h 00:03:36.584 TEST_HEADER include/spdk/histogram_data.h 00:03:36.584 TEST_HEADER include/spdk/idxd.h 00:03:36.584 TEST_HEADER include/spdk/idxd_spec.h 00:03:36.584 TEST_HEADER include/spdk/init.h 00:03:36.584 TEST_HEADER include/spdk/ioat.h 00:03:36.584 TEST_HEADER include/spdk/ioat_spec.h 00:03:36.584 TEST_HEADER include/spdk/iscsi_spec.h 00:03:36.584 TEST_HEADER include/spdk/json.h 00:03:36.584 TEST_HEADER include/spdk/jsonrpc.h 00:03:36.584 TEST_HEADER include/spdk/keyring.h 00:03:36.584 TEST_HEADER include/spdk/keyring_module.h 00:03:36.584 TEST_HEADER 
include/spdk/likely.h 00:03:36.584 TEST_HEADER include/spdk/log.h 00:03:36.584 TEST_HEADER include/spdk/lvol.h 00:03:36.584 TEST_HEADER include/spdk/md5.h 00:03:36.584 CC test/thread/poller_perf/poller_perf.o 00:03:36.584 TEST_HEADER include/spdk/memory.h 00:03:36.584 TEST_HEADER include/spdk/mmio.h 00:03:36.584 TEST_HEADER include/spdk/nbd.h 00:03:36.584 TEST_HEADER include/spdk/net.h 00:03:36.584 TEST_HEADER include/spdk/notify.h 00:03:36.584 TEST_HEADER include/spdk/nvme.h 00:03:36.584 CC examples/util/zipf/zipf.o 00:03:36.584 TEST_HEADER include/spdk/nvme_intel.h 00:03:36.584 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:36.584 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:36.584 TEST_HEADER include/spdk/nvme_spec.h 00:03:36.584 TEST_HEADER include/spdk/nvme_zns.h 00:03:36.584 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:36.584 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:36.584 TEST_HEADER include/spdk/nvmf.h 00:03:36.584 TEST_HEADER include/spdk/nvmf_spec.h 00:03:36.584 TEST_HEADER include/spdk/nvmf_transport.h 00:03:36.584 TEST_HEADER include/spdk/opal.h 00:03:36.584 TEST_HEADER include/spdk/opal_spec.h 00:03:36.584 TEST_HEADER include/spdk/pci_ids.h 00:03:36.584 CC test/app/bdev_svc/bdev_svc.o 00:03:36.584 TEST_HEADER include/spdk/pipe.h 00:03:36.584 TEST_HEADER include/spdk/queue.h 00:03:36.584 CC test/dma/test_dma/test_dma.o 00:03:36.584 TEST_HEADER include/spdk/reduce.h 00:03:36.584 TEST_HEADER include/spdk/rpc.h 00:03:36.584 TEST_HEADER include/spdk/scheduler.h 00:03:36.584 TEST_HEADER include/spdk/scsi.h 00:03:36.584 TEST_HEADER include/spdk/scsi_spec.h 00:03:36.584 TEST_HEADER include/spdk/sock.h 00:03:36.584 TEST_HEADER include/spdk/stdinc.h 00:03:36.584 TEST_HEADER include/spdk/string.h 00:03:36.584 TEST_HEADER include/spdk/thread.h 00:03:36.584 TEST_HEADER include/spdk/trace.h 00:03:36.584 TEST_HEADER include/spdk/trace_parser.h 00:03:36.584 TEST_HEADER include/spdk/tree.h 00:03:36.584 TEST_HEADER include/spdk/ublk.h 00:03:36.584 TEST_HEADER 
include/spdk/util.h 00:03:36.584 TEST_HEADER include/spdk/uuid.h 00:03:36.584 TEST_HEADER include/spdk/version.h 00:03:36.584 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:36.584 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:36.584 TEST_HEADER include/spdk/vhost.h 00:03:36.584 TEST_HEADER include/spdk/vmd.h 00:03:36.584 TEST_HEADER include/spdk/xor.h 00:03:36.584 TEST_HEADER include/spdk/zipf.h 00:03:36.584 CXX test/cpp_headers/accel.o 00:03:36.584 CC test/env/mem_callbacks/mem_callbacks.o 00:03:36.843 LINK rpc_client_test 00:03:36.843 LINK nvmf_tgt 00:03:36.843 LINK poller_perf 00:03:36.843 LINK zipf 00:03:36.843 LINK spdk_trace_record 00:03:36.843 LINK bdev_svc 00:03:36.843 CXX test/cpp_headers/accel_module.o 00:03:36.843 LINK spdk_trace 00:03:36.843 CXX test/cpp_headers/assert.o 00:03:37.102 CC app/iscsi_tgt/iscsi_tgt.o 00:03:37.102 CC test/env/vtophys/vtophys.o 00:03:37.102 CXX test/cpp_headers/barrier.o 00:03:37.102 CC examples/ioat/perf/perf.o 00:03:37.102 CC test/app/histogram_perf/histogram_perf.o 00:03:37.102 LINK test_dma 00:03:37.102 LINK vtophys 00:03:37.102 CC test/event/event_perf/event_perf.o 00:03:37.102 CC examples/vmd/lsvmd/lsvmd.o 00:03:37.360 CXX test/cpp_headers/base64.o 00:03:37.360 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:37.360 LINK iscsi_tgt 00:03:37.360 LINK mem_callbacks 00:03:37.360 LINK histogram_perf 00:03:37.360 LINK ioat_perf 00:03:37.360 LINK lsvmd 00:03:37.360 LINK event_perf 00:03:37.360 CXX test/cpp_headers/bdev.o 00:03:37.619 CC test/event/reactor/reactor.o 00:03:37.619 CC test/app/jsoncat/jsoncat.o 00:03:37.619 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:37.619 CXX test/cpp_headers/bdev_module.o 00:03:37.619 CC examples/ioat/verify/verify.o 00:03:37.619 CC test/env/memory/memory_ut.o 00:03:37.619 CC examples/vmd/led/led.o 00:03:37.619 LINK jsoncat 00:03:37.619 LINK reactor 00:03:37.619 CC app/spdk_tgt/spdk_tgt.o 00:03:37.619 LINK env_dpdk_post_init 00:03:37.619 LINK nvme_fuzz 00:03:37.619 CC 
test/accel/dif/dif.o 00:03:37.878 CXX test/cpp_headers/bdev_zone.o 00:03:37.878 LINK led 00:03:37.878 LINK verify 00:03:37.878 LINK spdk_tgt 00:03:37.878 CC test/event/reactor_perf/reactor_perf.o 00:03:37.878 CXX test/cpp_headers/bit_array.o 00:03:38.137 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.137 CC test/blobfs/mkfs/mkfs.o 00:03:38.137 CC test/event/app_repeat/app_repeat.o 00:03:38.137 LINK reactor_perf 00:03:38.137 CC test/event/scheduler/scheduler.o 00:03:38.137 CXX test/cpp_headers/bit_pool.o 00:03:38.137 CC examples/idxd/perf/perf.o 00:03:38.137 CC app/spdk_lspci/spdk_lspci.o 00:03:38.137 LINK mkfs 00:03:38.137 LINK app_repeat 00:03:38.396 CXX test/cpp_headers/blob_bdev.o 00:03:38.396 LINK spdk_lspci 00:03:38.396 LINK scheduler 00:03:38.396 CC test/lvol/esnap/esnap.o 00:03:38.396 CXX test/cpp_headers/blobfs_bdev.o 00:03:38.396 LINK dif 00:03:38.396 CC app/spdk_nvme_perf/perf.o 00:03:38.396 LINK idxd_perf 00:03:38.655 CC test/nvme/aer/aer.o 00:03:38.655 CC app/spdk_nvme_identify/identify.o 00:03:38.655 CC app/spdk_nvme_discover/discovery_aer.o 00:03:38.655 CXX test/cpp_headers/blobfs.o 00:03:38.655 LINK memory_ut 00:03:38.655 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:38.914 CXX test/cpp_headers/blob.o 00:03:38.914 LINK spdk_nvme_discover 00:03:38.914 LINK aer 00:03:38.914 CC examples/thread/thread/thread_ex.o 00:03:38.914 LINK interrupt_tgt 00:03:38.914 CXX test/cpp_headers/conf.o 00:03:38.914 CC test/env/pci/pci_ut.o 00:03:39.173 CC test/nvme/reset/reset.o 00:03:39.173 LINK thread 00:03:39.173 CXX test/cpp_headers/config.o 00:03:39.173 CXX test/cpp_headers/cpuset.o 00:03:39.173 CC test/nvme/sgl/sgl.o 00:03:39.173 CC test/bdev/bdevio/bdevio.o 00:03:39.432 CXX test/cpp_headers/crc16.o 00:03:39.432 LINK spdk_nvme_perf 00:03:39.432 LINK reset 00:03:39.432 LINK pci_ut 00:03:39.432 CXX test/cpp_headers/crc32.o 00:03:39.432 LINK sgl 00:03:39.432 CC examples/sock/hello_world/hello_sock.o 00:03:39.432 LINK spdk_nvme_identify 00:03:39.691 CXX 
test/cpp_headers/crc64.o 00:03:39.691 LINK bdevio 00:03:39.691 CXX test/cpp_headers/dif.o 00:03:39.691 CC test/nvme/e2edp/nvme_dp.o 00:03:39.691 CXX test/cpp_headers/dma.o 00:03:39.691 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:39.950 LINK iscsi_fuzz 00:03:39.950 LINK hello_sock 00:03:39.950 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:39.950 CC app/spdk_top/spdk_top.o 00:03:39.950 CXX test/cpp_headers/endian.o 00:03:39.950 CC test/nvme/overhead/overhead.o 00:03:39.950 CC test/app/stub/stub.o 00:03:39.950 CC test/nvme/err_injection/err_injection.o 00:03:39.950 LINK nvme_dp 00:03:40.233 CXX test/cpp_headers/env_dpdk.o 00:03:40.233 LINK stub 00:03:40.233 CC test/nvme/startup/startup.o 00:03:40.233 CXX test/cpp_headers/env.o 00:03:40.233 LINK err_injection 00:03:40.233 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:40.233 CC test/nvme/reserve/reserve.o 00:03:40.233 LINK overhead 00:03:40.233 LINK vhost_fuzz 00:03:40.516 LINK startup 00:03:40.516 CXX test/cpp_headers/event.o 00:03:40.516 LINK reserve 00:03:40.516 CC examples/accel/perf/accel_perf.o 00:03:40.516 CC test/nvme/simple_copy/simple_copy.o 00:03:40.516 LINK hello_fsdev 00:03:40.516 CXX test/cpp_headers/fd_group.o 00:03:40.776 CC test/nvme/connect_stress/connect_stress.o 00:03:40.776 CC examples/blob/hello_world/hello_blob.o 00:03:40.776 CC examples/nvme/hello_world/hello_world.o 00:03:40.776 CXX test/cpp_headers/fd.o 00:03:40.776 CC test/nvme/boot_partition/boot_partition.o 00:03:40.776 LINK simple_copy 00:03:40.776 LINK connect_stress 00:03:40.776 CC test/nvme/compliance/nvme_compliance.o 00:03:40.776 CXX test/cpp_headers/file.o 00:03:41.036 LINK spdk_top 00:03:41.036 LINK boot_partition 00:03:41.036 LINK hello_blob 00:03:41.036 LINK hello_world 00:03:41.036 CXX test/cpp_headers/fsdev.o 00:03:41.036 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.036 LINK accel_perf 00:03:41.296 CC examples/blob/cli/blobcli.o 00:03:41.296 CC app/vhost/vhost.o 00:03:41.296 CC 
test/nvme/doorbell_aers/doorbell_aers.o 00:03:41.296 CC test/nvme/fdp/fdp.o 00:03:41.296 LINK nvme_compliance 00:03:41.296 CXX test/cpp_headers/fsdev_module.o 00:03:41.296 CC examples/nvme/reconnect/reconnect.o 00:03:41.296 LINK fused_ordering 00:03:41.296 CXX test/cpp_headers/ftl.o 00:03:41.297 LINK vhost 00:03:41.297 LINK doorbell_aers 00:03:41.297 CC app/spdk_dd/spdk_dd.o 00:03:41.556 CC test/nvme/cuse/cuse.o 00:03:41.556 CXX test/cpp_headers/gpt_spec.o 00:03:41.556 LINK fdp 00:03:41.556 CXX test/cpp_headers/hexlify.o 00:03:41.556 LINK reconnect 00:03:41.817 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:41.817 LINK blobcli 00:03:41.817 CC app/fio/nvme/fio_plugin.o 00:03:41.817 CC examples/bdev/hello_world/hello_bdev.o 00:03:41.817 LINK spdk_dd 00:03:41.817 CXX test/cpp_headers/histogram_data.o 00:03:41.817 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.817 CC examples/nvme/arbitration/arbitration.o 00:03:42.076 CXX test/cpp_headers/idxd.o 00:03:42.076 CXX test/cpp_headers/idxd_spec.o 00:03:42.076 CC examples/nvme/hotplug/hotplug.o 00:03:42.076 LINK hello_bdev 00:03:42.076 CXX test/cpp_headers/init.o 00:03:42.076 CXX test/cpp_headers/ioat.o 00:03:42.336 CXX test/cpp_headers/ioat_spec.o 00:03:42.336 LINK hotplug 00:03:42.336 LINK nvme_manage 00:03:42.336 LINK arbitration 00:03:42.336 CXX test/cpp_headers/iscsi_spec.o 00:03:42.336 LINK spdk_nvme 00:03:42.336 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:42.336 CXX test/cpp_headers/json.o 00:03:42.596 CXX test/cpp_headers/jsonrpc.o 00:03:42.596 CXX test/cpp_headers/keyring.o 00:03:42.596 CC examples/nvme/abort/abort.o 00:03:42.596 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.596 CC app/fio/bdev/fio_plugin.o 00:03:42.596 CXX test/cpp_headers/keyring_module.o 00:03:42.596 LINK cmb_copy 00:03:42.596 CXX test/cpp_headers/likely.o 00:03:42.596 CXX test/cpp_headers/log.o 00:03:42.596 LINK pmr_persistence 00:03:42.855 CXX test/cpp_headers/lvol.o 00:03:42.855 CXX test/cpp_headers/md5.o 00:03:42.855 CXX 
test/cpp_headers/memory.o 00:03:42.855 LINK bdevperf 00:03:42.855 CXX test/cpp_headers/mmio.o 00:03:42.855 LINK cuse 00:03:42.855 CXX test/cpp_headers/nbd.o 00:03:42.855 CXX test/cpp_headers/net.o 00:03:42.855 CXX test/cpp_headers/notify.o 00:03:42.855 LINK abort 00:03:42.855 CXX test/cpp_headers/nvme.o 00:03:43.114 CXX test/cpp_headers/nvme_intel.o 00:03:43.114 CXX test/cpp_headers/nvme_ocssd.o 00:03:43.114 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:43.114 CXX test/cpp_headers/nvme_spec.o 00:03:43.114 CXX test/cpp_headers/nvme_zns.o 00:03:43.114 CXX test/cpp_headers/nvmf_cmd.o 00:03:43.114 LINK spdk_bdev 00:03:43.114 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:43.114 CXX test/cpp_headers/nvmf.o 00:03:43.114 CXX test/cpp_headers/nvmf_spec.o 00:03:43.373 CXX test/cpp_headers/nvmf_transport.o 00:03:43.373 CXX test/cpp_headers/opal.o 00:03:43.373 CXX test/cpp_headers/opal_spec.o 00:03:43.373 CXX test/cpp_headers/pci_ids.o 00:03:43.373 CXX test/cpp_headers/pipe.o 00:03:43.373 CC examples/nvmf/nvmf/nvmf.o 00:03:43.373 CXX test/cpp_headers/queue.o 00:03:43.373 CXX test/cpp_headers/reduce.o 00:03:43.373 CXX test/cpp_headers/rpc.o 00:03:43.373 CXX test/cpp_headers/scheduler.o 00:03:43.373 CXX test/cpp_headers/scsi.o 00:03:43.373 CXX test/cpp_headers/scsi_spec.o 00:03:43.373 CXX test/cpp_headers/sock.o 00:03:43.373 CXX test/cpp_headers/stdinc.o 00:03:43.373 CXX test/cpp_headers/string.o 00:03:43.632 CXX test/cpp_headers/thread.o 00:03:43.632 CXX test/cpp_headers/trace.o 00:03:43.632 CXX test/cpp_headers/trace_parser.o 00:03:43.632 CXX test/cpp_headers/tree.o 00:03:43.632 CXX test/cpp_headers/ublk.o 00:03:43.632 CXX test/cpp_headers/util.o 00:03:43.632 CXX test/cpp_headers/uuid.o 00:03:43.632 CXX test/cpp_headers/version.o 00:03:43.632 CXX test/cpp_headers/vfio_user_pci.o 00:03:43.632 LINK nvmf 00:03:43.632 CXX test/cpp_headers/vfio_user_spec.o 00:03:43.632 CXX test/cpp_headers/vmd.o 00:03:43.632 CXX test/cpp_headers/vhost.o 00:03:43.891 CXX test/cpp_headers/xor.o 
00:03:43.891 CXX test/cpp_headers/zipf.o 00:03:44.830 LINK esnap 00:03:45.399 00:03:45.399 real 1m30.278s 00:03:45.399 user 7m52.269s 00:03:45.399 sys 1m47.143s 00:03:45.399 12:29:44 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:45.399 12:29:44 make -- common/autotest_common.sh@10 -- $ set +x 00:03:45.399 ************************************ 00:03:45.399 END TEST make 00:03:45.399 ************************************ 00:03:45.399 12:29:45 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:45.399 12:29:45 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:45.399 12:29:45 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:45.399 12:29:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.399 12:29:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:45.400 12:29:45 -- pm/common@44 -- $ pid=5471 00:03:45.400 12:29:45 -- pm/common@50 -- $ kill -TERM 5471 00:03:45.400 12:29:45 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.400 12:29:45 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:45.400 12:29:45 -- pm/common@44 -- $ pid=5472 00:03:45.400 12:29:45 -- pm/common@50 -- $ kill -TERM 5472 00:03:45.400 12:29:45 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:45.400 12:29:45 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:45.660 12:29:45 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:45.660 12:29:45 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:45.660 12:29:45 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:45.660 12:29:45 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:45.660 12:29:45 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:45.660 12:29:45 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:45.660 
12:29:45 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:45.660 12:29:45 -- scripts/common.sh@336 -- # IFS=.-: 00:03:45.660 12:29:45 -- scripts/common.sh@336 -- # read -ra ver1 00:03:45.660 12:29:45 -- scripts/common.sh@337 -- # IFS=.-: 00:03:45.660 12:29:45 -- scripts/common.sh@337 -- # read -ra ver2 00:03:45.660 12:29:45 -- scripts/common.sh@338 -- # local 'op=<' 00:03:45.660 12:29:45 -- scripts/common.sh@340 -- # ver1_l=2 00:03:45.660 12:29:45 -- scripts/common.sh@341 -- # ver2_l=1 00:03:45.660 12:29:45 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:45.660 12:29:45 -- scripts/common.sh@344 -- # case "$op" in 00:03:45.660 12:29:45 -- scripts/common.sh@345 -- # : 1 00:03:45.660 12:29:45 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:45.660 12:29:45 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:45.660 12:29:45 -- scripts/common.sh@365 -- # decimal 1 00:03:45.660 12:29:45 -- scripts/common.sh@353 -- # local d=1 00:03:45.660 12:29:45 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:45.660 12:29:45 -- scripts/common.sh@355 -- # echo 1 00:03:45.660 12:29:45 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:45.660 12:29:45 -- scripts/common.sh@366 -- # decimal 2 00:03:45.660 12:29:45 -- scripts/common.sh@353 -- # local d=2 00:03:45.660 12:29:45 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:45.660 12:29:45 -- scripts/common.sh@355 -- # echo 2 00:03:45.660 12:29:45 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:45.660 12:29:45 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:45.660 12:29:45 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:45.660 12:29:45 -- scripts/common.sh@368 -- # return 0 00:03:45.660 12:29:45 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:45.660 12:29:45 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.660 
--rc genhtml_branch_coverage=1 00:03:45.660 --rc genhtml_function_coverage=1 00:03:45.660 --rc genhtml_legend=1 00:03:45.660 --rc geninfo_all_blocks=1 00:03:45.660 --rc geninfo_unexecuted_blocks=1 00:03:45.660 00:03:45.660 ' 00:03:45.660 12:29:45 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.660 --rc genhtml_branch_coverage=1 00:03:45.660 --rc genhtml_function_coverage=1 00:03:45.660 --rc genhtml_legend=1 00:03:45.660 --rc geninfo_all_blocks=1 00:03:45.660 --rc geninfo_unexecuted_blocks=1 00:03:45.660 00:03:45.660 ' 00:03:45.660 12:29:45 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.660 --rc genhtml_branch_coverage=1 00:03:45.660 --rc genhtml_function_coverage=1 00:03:45.660 --rc genhtml_legend=1 00:03:45.660 --rc geninfo_all_blocks=1 00:03:45.660 --rc geninfo_unexecuted_blocks=1 00:03:45.660 00:03:45.660 ' 00:03:45.660 12:29:45 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:45.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:45.660 --rc genhtml_branch_coverage=1 00:03:45.660 --rc genhtml_function_coverage=1 00:03:45.660 --rc genhtml_legend=1 00:03:45.660 --rc geninfo_all_blocks=1 00:03:45.660 --rc geninfo_unexecuted_blocks=1 00:03:45.660 00:03:45.660 ' 00:03:45.660 12:29:45 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:45.660 12:29:45 -- nvmf/common.sh@7 -- # uname -s 00:03:45.660 12:29:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:45.660 12:29:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:45.660 12:29:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:45.660 12:29:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:45.660 12:29:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:45.660 12:29:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:45.660 12:29:45 -- nvmf/common.sh@14 -- 
# NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:45.660 12:29:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:45.660 12:29:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:45.660 12:29:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:45.660 12:29:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:acf39263-853c-4270-82f2-9ace538f8911 00:03:45.660 12:29:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=acf39263-853c-4270-82f2-9ace538f8911 00:03:45.660 12:29:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:45.660 12:29:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:45.660 12:29:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:45.660 12:29:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:45.660 12:29:45 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:45.660 12:29:45 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:45.660 12:29:45 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:45.660 12:29:45 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:45.660 12:29:45 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:45.660 12:29:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.660 12:29:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.660 12:29:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.660 12:29:45 -- paths/export.sh@5 -- # export PATH 00:03:45.660 12:29:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:45.660 12:29:45 -- nvmf/common.sh@51 -- # : 0 00:03:45.660 12:29:45 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:45.660 12:29:45 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:45.660 12:29:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:45.660 12:29:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:45.660 12:29:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:45.660 12:29:45 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:45.660 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:45.660 12:29:45 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:45.660 12:29:45 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:45.660 12:29:45 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:45.660 12:29:45 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:45.660 12:29:45 -- spdk/autotest.sh@32 -- # uname -s 00:03:45.660 12:29:45 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:45.660 12:29:45 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:45.660 12:29:45 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.660 12:29:45 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P 
%s %t' 00:03:45.660 12:29:45 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:45.660 12:29:45 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:45.660 12:29:45 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:45.660 12:29:45 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:45.660 12:29:45 -- spdk/autotest.sh@48 -- # udevadm_pid=56272 00:03:45.660 12:29:45 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:45.660 12:29:45 -- pm/common@17 -- # local monitor 00:03:45.920 12:29:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.920 12:29:45 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:45.920 12:29:45 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:45.920 12:29:45 -- pm/common@25 -- # sleep 1 00:03:45.920 12:29:45 -- pm/common@21 -- # date +%s 00:03:45.920 12:29:45 -- pm/common@21 -- # date +%s 00:03:45.920 12:29:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734179385 00:03:45.920 12:29:45 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734179385 00:03:45.920 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734179385_collect-vmstat.pm.log 00:03:45.920 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734179385_collect-cpu-load.pm.log 00:03:46.860 12:29:46 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:46.860 12:29:46 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:46.860 12:29:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:46.860 12:29:46 -- common/autotest_common.sh@10 -- # set +x 00:03:46.860 12:29:46 -- spdk/autotest.sh@59 -- # create_test_list 00:03:46.860 12:29:46 -- common/autotest_common.sh@752 
-- # xtrace_disable 00:03:46.860 12:29:46 -- common/autotest_common.sh@10 -- # set +x 00:03:46.860 12:29:46 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:46.860 12:29:46 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:46.860 12:29:46 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:46.860 12:29:46 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:46.860 12:29:46 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:46.860 12:29:46 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:46.860 12:29:46 -- common/autotest_common.sh@1457 -- # uname 00:03:46.860 12:29:46 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:46.860 12:29:46 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:46.860 12:29:46 -- common/autotest_common.sh@1477 -- # uname 00:03:46.860 12:29:46 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:46.860 12:29:46 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:46.860 12:29:46 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:46.860 lcov: LCOV version 1.15 00:03:46.860 12:29:46 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:04.978 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.978 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:19.893 12:30:18 -- spdk/autotest.sh@76 -- # 
timing_enter pre_cleanup 00:04:19.893 12:30:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.893 12:30:18 -- common/autotest_common.sh@10 -- # set +x 00:04:19.893 12:30:18 -- spdk/autotest.sh@78 -- # rm -f 00:04:19.893 12:30:18 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.893 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.893 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:19.893 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:19.893 12:30:18 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:19.893 12:30:18 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:19.893 12:30:18 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:19.893 12:30:18 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:19.893 12:30:18 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:19.893 12:30:18 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:19.893 12:30:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:19.893 12:30:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:19.893 12:30:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:19.893 12:30:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:19.893 12:30:18 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:19.893 12:30:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.893 12:30:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.893 12:30:18 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:19.893 12:30:18 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:19.893 12:30:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:19.893 12:30:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 
00:04:19.893 12:30:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:19.893 12:30:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:19.893 12:30:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.893 12:30:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:19.893 12:30:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:19.893 12:30:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:19.893 12:30:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:19.893 12:30:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.893 12:30:18 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:19.893 12:30:18 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:19.893 12:30:18 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:19.893 12:30:18 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:19.893 12:30:18 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:19.893 12:30:18 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:19.893 12:30:18 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.893 12:30:18 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.893 12:30:18 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:19.893 12:30:18 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:19.893 12:30:18 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.893 No valid GPT data, bailing 00:04:19.893 12:30:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.893 12:30:19 -- scripts/common.sh@394 -- # pt= 00:04:19.893 12:30:19 -- scripts/common.sh@395 -- # return 1 00:04:19.893 12:30:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.893 1+0 records in 00:04:19.893 1+0 
records out 00:04:19.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00710704 s, 148 MB/s 00:04:19.893 12:30:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.893 12:30:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.893 12:30:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:19.893 12:30:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:19.893 12:30:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:19.893 No valid GPT data, bailing 00:04:19.893 12:30:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:19.893 12:30:19 -- scripts/common.sh@394 -- # pt= 00:04:19.893 12:30:19 -- scripts/common.sh@395 -- # return 1 00:04:19.893 12:30:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:19.893 1+0 records in 00:04:19.893 1+0 records out 00:04:19.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633275 s, 166 MB/s 00:04:19.893 12:30:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.893 12:30:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.893 12:30:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:19.893 12:30:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:19.893 12:30:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:19.893 No valid GPT data, bailing 00:04:19.893 12:30:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:19.893 12:30:19 -- scripts/common.sh@394 -- # pt= 00:04:19.893 12:30:19 -- scripts/common.sh@395 -- # return 1 00:04:19.893 12:30:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:19.893 1+0 records in 00:04:19.893 1+0 records out 00:04:19.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435064 s, 241 MB/s 00:04:19.893 12:30:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.893 12:30:19 -- spdk/autotest.sh@99 -- # [[ 
-z '' ]] 00:04:19.893 12:30:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:19.893 12:30:19 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:19.893 12:30:19 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:19.893 No valid GPT data, bailing 00:04:19.893 12:30:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:19.893 12:30:19 -- scripts/common.sh@394 -- # pt= 00:04:19.893 12:30:19 -- scripts/common.sh@395 -- # return 1 00:04:19.893 12:30:19 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:19.893 1+0 records in 00:04:19.893 1+0 records out 00:04:19.893 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454339 s, 231 MB/s 00:04:19.893 12:30:19 -- spdk/autotest.sh@105 -- # sync 00:04:19.893 12:30:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:19.893 12:30:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:19.893 12:30:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:23.187 12:30:22 -- spdk/autotest.sh@111 -- # uname -s 00:04:23.187 12:30:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:23.187 12:30:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:23.187 12:30:22 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:23.446 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.447 Hugepages 00:04:23.447 node hugesize free / total 00:04:23.447 node0 1048576kB 0 / 0 00:04:23.447 node0 2048kB 0 / 0 00:04:23.447 00:04:23.447 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.447 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:23.447 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:23.706 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:23.706 12:30:23 -- spdk/autotest.sh@117 -- # uname 
-s 00:04:23.706 12:30:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:23.706 12:30:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:23.706 12:30:23 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.533 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.533 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.533 12:30:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:25.470 12:30:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:25.470 12:30:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:25.470 12:30:25 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:25.732 12:30:25 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:25.732 12:30:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:25.732 12:30:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:25.732 12:30:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.732 12:30:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:25.732 12:30:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:25.732 12:30:25 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:25.732 12:30:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:25.732 12:30:25 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:25.996 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.996 Waiting for block devices as requested 00:04:26.256 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.256 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.256 12:30:25 -- 
common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.256 12:30:25 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:26.256 12:30:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:26.256 12:30:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:26.256 12:30:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:26.256 12:30:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:26.256 12:30:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:26.256 12:30:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:26.256 12:30:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:26.256 12:30:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:26.257 12:30:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:26.257 12:30:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.257 12:30:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.257 12:30:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.257 12:30:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.257 12:30:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.257 12:30:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:26.257 12:30:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.257 12:30:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.257 12:30:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.257 12:30:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.257 12:30:25 -- common/autotest_common.sh@1543 -- # continue 00:04:26.257 12:30:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:26.257 12:30:25 -- 
common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:26.257 12:30:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:26.257 12:30:25 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:26.257 12:30:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:26.257 12:30:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:26.257 12:30:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:26.257 12:30:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:26.257 12:30:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:26.257 12:30:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:26.257 12:30:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:26.257 12:30:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:26.257 12:30:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:26.257 12:30:25 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:26.257 12:30:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:26.257 12:30:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:26.257 12:30:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:26.516 12:30:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:26.516 12:30:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:26.516 12:30:26 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:26.516 12:30:26 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:26.516 12:30:26 -- common/autotest_common.sh@1543 -- # continue 00:04:26.516 12:30:26 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:26.516 12:30:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:26.516 12:30:26 -- common/autotest_common.sh@10 -- 
# set +x 00:04:26.516 12:30:26 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:26.516 12:30:26 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:26.516 12:30:26 -- common/autotest_common.sh@10 -- # set +x 00:04:26.516 12:30:26 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:27.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:27.344 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.344 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:27.344 12:30:27 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:27.344 12:30:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:27.344 12:30:27 -- common/autotest_common.sh@10 -- # set +x 00:04:27.345 12:30:27 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:27.345 12:30:27 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:27.345 12:30:27 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:27.345 12:30:27 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:27.345 12:30:27 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:27.345 12:30:27 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:27.345 12:30:27 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:27.345 12:30:27 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:27.345 12:30:27 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:27.345 12:30:27 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:27.345 12:30:27 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:27.345 12:30:27 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:27.345 12:30:27 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:27.604 12:30:27 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:27.604 12:30:27 -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:27.604 12:30:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:27.604 12:30:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:27.604 12:30:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:27.604 12:30:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.604 12:30:27 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:27.604 12:30:27 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:27.604 12:30:27 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:27.604 12:30:27 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:27.604 12:30:27 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:27.604 12:30:27 -- common/autotest_common.sh@1572 -- # return 0 00:04:27.604 12:30:27 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:27.604 12:30:27 -- common/autotest_common.sh@1580 -- # return 0 00:04:27.604 12:30:27 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:27.604 12:30:27 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:27.604 12:30:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.604 12:30:27 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.604 12:30:27 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:27.604 12:30:27 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.604 12:30:27 -- common/autotest_common.sh@10 -- # set +x 00:04:27.604 12:30:27 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:27.604 12:30:27 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:27.604 12:30:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.604 12:30:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.604 12:30:27 -- common/autotest_common.sh@10 -- # set +x 00:04:27.604 ************************************ 
00:04:27.604 START TEST env 00:04:27.604 ************************************ 00:04:27.604 12:30:27 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:27.604 * Looking for test storage... 00:04:27.604 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:27.604 12:30:27 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.604 12:30:27 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.604 12:30:27 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.865 12:30:27 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.865 12:30:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.865 12:30:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.865 12:30:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.865 12:30:27 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.865 12:30:27 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.865 12:30:27 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.865 12:30:27 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.865 12:30:27 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.865 12:30:27 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.865 12:30:27 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.865 12:30:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.865 12:30:27 env -- scripts/common.sh@344 -- # case "$op" in 00:04:27.865 12:30:27 env -- scripts/common.sh@345 -- # : 1 00:04:27.865 12:30:27 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.865 12:30:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:27.865 12:30:27 env -- scripts/common.sh@365 -- # decimal 1 00:04:27.865 12:30:27 env -- scripts/common.sh@353 -- # local d=1 00:04:27.865 12:30:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.865 12:30:27 env -- scripts/common.sh@355 -- # echo 1 00:04:27.865 12:30:27 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.865 12:30:27 env -- scripts/common.sh@366 -- # decimal 2 00:04:27.865 12:30:27 env -- scripts/common.sh@353 -- # local d=2 00:04:27.865 12:30:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.865 12:30:27 env -- scripts/common.sh@355 -- # echo 2 00:04:27.865 12:30:27 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.865 12:30:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.865 12:30:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.865 12:30:27 env -- scripts/common.sh@368 -- # return 0 00:04:27.865 12:30:27 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.865 12:30:27 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.865 --rc genhtml_branch_coverage=1 00:04:27.865 --rc genhtml_function_coverage=1 00:04:27.865 --rc genhtml_legend=1 00:04:27.865 --rc geninfo_all_blocks=1 00:04:27.865 --rc geninfo_unexecuted_blocks=1 00:04:27.865 00:04:27.865 ' 00:04:27.865 12:30:27 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.865 --rc genhtml_branch_coverage=1 00:04:27.865 --rc genhtml_function_coverage=1 00:04:27.865 --rc genhtml_legend=1 00:04:27.865 --rc geninfo_all_blocks=1 00:04:27.865 --rc geninfo_unexecuted_blocks=1 00:04:27.865 00:04:27.865 ' 00:04:27.865 12:30:27 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:27.865 --rc genhtml_branch_coverage=1 00:04:27.865 --rc genhtml_function_coverage=1 00:04:27.865 --rc genhtml_legend=1 00:04:27.865 --rc geninfo_all_blocks=1 00:04:27.865 --rc geninfo_unexecuted_blocks=1 00:04:27.865 00:04:27.865 ' 00:04:27.865 12:30:27 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.865 --rc genhtml_branch_coverage=1 00:04:27.865 --rc genhtml_function_coverage=1 00:04:27.865 --rc genhtml_legend=1 00:04:27.865 --rc geninfo_all_blocks=1 00:04:27.865 --rc geninfo_unexecuted_blocks=1 00:04:27.865 00:04:27.865 ' 00:04:27.865 12:30:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:27.865 12:30:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.865 12:30:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.865 12:30:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.865 ************************************ 00:04:27.865 START TEST env_memory 00:04:27.865 ************************************ 00:04:27.865 12:30:27 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:27.865 00:04:27.865 00:04:27.865 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.865 http://cunit.sourceforge.net/ 00:04:27.865 00:04:27.865 00:04:27.865 Suite: memory 00:04:27.865 Test: alloc and free memory map ...[2024-12-14 12:30:27.450645] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:27.865 passed 00:04:27.865 Test: mem map translation ...[2024-12-14 12:30:27.513349] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:27.865 [2024-12-14 12:30:27.513500] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:27.865 [2024-12-14 12:30:27.513631] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:27.865 [2024-12-14 12:30:27.513685] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:27.865 passed 00:04:27.865 Test: mem map registration ...[2024-12-14 12:30:27.588786] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:27.865 [2024-12-14 12:30:27.588918] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:28.125 passed 00:04:28.125 Test: mem map adjacent registrations ...passed 00:04:28.125 00:04:28.125 Run Summary: Type Total Ran Passed Failed Inactive 00:04:28.125 suites 1 1 n/a 0 0 00:04:28.125 tests 4 4 4 0 0 00:04:28.125 asserts 152 152 152 0 n/a 00:04:28.125 00:04:28.125 Elapsed time = 0.296 seconds 00:04:28.125 00:04:28.125 real 0m0.334s 00:04:28.125 user 0m0.300s 00:04:28.125 sys 0m0.022s 00:04:28.125 12:30:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.125 12:30:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:28.125 ************************************ 00:04:28.125 END TEST env_memory 00:04:28.125 ************************************ 00:04:28.125 12:30:27 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:28.125 12:30:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.125 12:30:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.125 12:30:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:28.125 
************************************ 00:04:28.125 START TEST env_vtophys 00:04:28.125 ************************************ 00:04:28.125 12:30:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:28.125 EAL: lib.eal log level changed from notice to debug 00:04:28.125 EAL: Detected lcore 0 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 1 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 2 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 3 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 4 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 5 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 6 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 7 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 8 as core 0 on socket 0 00:04:28.125 EAL: Detected lcore 9 as core 0 on socket 0 00:04:28.125 EAL: Maximum logical cores by configuration: 128 00:04:28.125 EAL: Detected CPU lcores: 10 00:04:28.125 EAL: Detected NUMA nodes: 1 00:04:28.125 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:28.125 EAL: Detected shared linkage of DPDK 00:04:28.125 EAL: No shared files mode enabled, IPC will be disabled 00:04:28.125 EAL: Selected IOVA mode 'PA' 00:04:28.125 EAL: Probing VFIO support... 00:04:28.125 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.125 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:28.125 EAL: Ask a virtual area of 0x2e000 bytes 00:04:28.125 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:28.125 EAL: Setting up physically contiguous memory... 
00:04:28.125 EAL: Setting maximum number of open files to 524288 00:04:28.125 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:28.125 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:28.125 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.125 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:28.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.125 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.125 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:28.125 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:28.125 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.125 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:28.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.125 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.125 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:28.125 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:28.125 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.125 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:28.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.125 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.125 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:28.125 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:28.125 EAL: Ask a virtual area of 0x61000 bytes 00:04:28.125 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:28.125 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:28.125 EAL: Ask a virtual area of 0x400000000 bytes 00:04:28.125 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:28.125 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:28.125 EAL: Hugepages will be freed exactly as allocated. 
00:04:28.125 EAL: No shared files mode enabled, IPC is disabled 00:04:28.125 EAL: No shared files mode enabled, IPC is disabled 00:04:28.385 EAL: TSC frequency is ~2290000 KHz 00:04:28.385 EAL: Main lcore 0 is ready (tid=7fb19f419a40;cpuset=[0]) 00:04:28.385 EAL: Trying to obtain current memory policy. 00:04:28.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.385 EAL: Restoring previous memory policy: 0 00:04:28.385 EAL: request: mp_malloc_sync 00:04:28.385 EAL: No shared files mode enabled, IPC is disabled 00:04:28.385 EAL: Heap on socket 0 was expanded by 2MB 00:04:28.385 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:28.385 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:28.385 EAL: Mem event callback 'spdk:(nil)' registered 00:04:28.385 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:28.385 00:04:28.385 00:04:28.385 CUnit - A unit testing framework for C - Version 2.1-3 00:04:28.385 http://cunit.sourceforge.net/ 00:04:28.385 00:04:28.385 00:04:28.385 Suite: components_suite 00:04:28.953 Test: vtophys_malloc_test ...passed 00:04:28.953 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:28.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.953 EAL: Restoring previous memory policy: 4 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was expanded by 4MB 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was shrunk by 4MB 00:04:28.953 EAL: Trying to obtain current memory policy. 
00:04:28.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.953 EAL: Restoring previous memory policy: 4 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was expanded by 6MB 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was shrunk by 6MB 00:04:28.953 EAL: Trying to obtain current memory policy. 00:04:28.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.953 EAL: Restoring previous memory policy: 4 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was expanded by 10MB 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was shrunk by 10MB 00:04:28.953 EAL: Trying to obtain current memory policy. 00:04:28.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.953 EAL: Restoring previous memory policy: 4 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was expanded by 18MB 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was shrunk by 18MB 00:04:28.953 EAL: Trying to obtain current memory policy. 
00:04:28.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.953 EAL: Restoring previous memory policy: 4 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was expanded by 34MB 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was shrunk by 34MB 00:04:28.953 EAL: Trying to obtain current memory policy. 00:04:28.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.953 EAL: Restoring previous memory policy: 4 00:04:28.953 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.953 EAL: request: mp_malloc_sync 00:04:28.953 EAL: No shared files mode enabled, IPC is disabled 00:04:28.953 EAL: Heap on socket 0 was expanded by 66MB 00:04:29.213 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.213 EAL: request: mp_malloc_sync 00:04:29.213 EAL: No shared files mode enabled, IPC is disabled 00:04:29.213 EAL: Heap on socket 0 was shrunk by 66MB 00:04:29.213 EAL: Trying to obtain current memory policy. 00:04:29.213 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.472 EAL: Restoring previous memory policy: 4 00:04:29.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.472 EAL: request: mp_malloc_sync 00:04:29.472 EAL: No shared files mode enabled, IPC is disabled 00:04:29.472 EAL: Heap on socket 0 was expanded by 130MB 00:04:29.472 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.472 EAL: request: mp_malloc_sync 00:04:29.472 EAL: No shared files mode enabled, IPC is disabled 00:04:29.472 EAL: Heap on socket 0 was shrunk by 130MB 00:04:29.730 EAL: Trying to obtain current memory policy. 
00:04:29.730 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:29.989 EAL: Restoring previous memory policy: 4 00:04:29.989 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.989 EAL: request: mp_malloc_sync 00:04:29.989 EAL: No shared files mode enabled, IPC is disabled 00:04:29.989 EAL: Heap on socket 0 was expanded by 258MB 00:04:30.248 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.248 EAL: request: mp_malloc_sync 00:04:30.248 EAL: No shared files mode enabled, IPC is disabled 00:04:30.248 EAL: Heap on socket 0 was shrunk by 258MB 00:04:30.817 EAL: Trying to obtain current memory policy. 00:04:30.817 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:30.817 EAL: Restoring previous memory policy: 4 00:04:30.817 EAL: Calling mem event callback 'spdk:(nil)' 00:04:30.817 EAL: request: mp_malloc_sync 00:04:30.817 EAL: No shared files mode enabled, IPC is disabled 00:04:30.817 EAL: Heap on socket 0 was expanded by 514MB 00:04:31.753 EAL: Calling mem event callback 'spdk:(nil)' 00:04:31.753 EAL: request: mp_malloc_sync 00:04:31.753 EAL: No shared files mode enabled, IPC is disabled 00:04:31.753 EAL: Heap on socket 0 was shrunk by 514MB 00:04:32.690 EAL: Trying to obtain current memory policy. 
00:04:32.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:32.949 EAL: Restoring previous memory policy: 4 00:04:32.949 EAL: Calling mem event callback 'spdk:(nil)' 00:04:32.949 EAL: request: mp_malloc_sync 00:04:32.949 EAL: No shared files mode enabled, IPC is disabled 00:04:32.949 EAL: Heap on socket 0 was expanded by 1026MB 00:04:34.924 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.924 EAL: request: mp_malloc_sync 00:04:34.924 EAL: No shared files mode enabled, IPC is disabled 00:04:34.924 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:36.835 passed 00:04:36.835 00:04:36.835 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.835 suites 1 1 n/a 0 0 00:04:36.835 tests 2 2 2 0 0 00:04:36.835 asserts 5677 5677 5677 0 n/a 00:04:36.835 00:04:36.835 Elapsed time = 8.010 seconds 00:04:36.835 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.835 EAL: request: mp_malloc_sync 00:04:36.835 EAL: No shared files mode enabled, IPC is disabled 00:04:36.835 EAL: Heap on socket 0 was shrunk by 2MB 00:04:36.835 EAL: No shared files mode enabled, IPC is disabled 00:04:36.835 EAL: No shared files mode enabled, IPC is disabled 00:04:36.835 EAL: No shared files mode enabled, IPC is disabled 00:04:36.835 00:04:36.835 real 0m8.342s 00:04:36.835 user 0m7.326s 00:04:36.835 sys 0m0.860s 00:04:36.835 ************************************ 00:04:36.835 END TEST env_vtophys 00:04:36.835 ************************************ 00:04:36.835 12:30:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.835 12:30:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:36.835 12:30:36 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.835 12:30:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.835 12:30:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.835 12:30:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.835 
************************************ 00:04:36.835 START TEST env_pci 00:04:36.835 ************************************ 00:04:36.835 12:30:36 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:36.835 00:04:36.835 00:04:36.835 CUnit - A unit testing framework for C - Version 2.1-3 00:04:36.835 http://cunit.sourceforge.net/ 00:04:36.835 00:04:36.835 00:04:36.835 Suite: pci 00:04:36.835 Test: pci_hook ...[2024-12-14 12:30:36.213814] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58610 has claimed it 00:04:36.835 passed 00:04:36.835 00:04:36.835 Run Summary: Type Total Ran Passed Failed Inactive 00:04:36.835 suites 1 1 n/a 0 0 00:04:36.835 tests 1 1 1 0 0 00:04:36.835 asserts 25 25 25 0 n/a 00:04:36.835 00:04:36.835 Elapsed time = 0.007 seconds 00:04:36.835 EAL: Cannot find device (10000:00:01.0) 00:04:36.835 EAL: Failed to attach device on primary process 00:04:36.835 00:04:36.835 real 0m0.104s 00:04:36.835 user 0m0.053s 00:04:36.835 sys 0m0.049s 00:04:36.835 ************************************ 00:04:36.835 END TEST env_pci 00:04:36.835 ************************************ 00:04:36.835 12:30:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.835 12:30:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:36.835 12:30:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:36.835 12:30:36 env -- env/env.sh@15 -- # uname 00:04:36.835 12:30:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:36.835 12:30:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:36.835 12:30:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.835 12:30:36 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:36.835 12:30:36 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.835 12:30:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:36.835 ************************************ 00:04:36.835 START TEST env_dpdk_post_init 00:04:36.835 ************************************ 00:04:36.835 12:30:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:36.835 EAL: Detected CPU lcores: 10 00:04:36.835 EAL: Detected NUMA nodes: 1 00:04:36.835 EAL: Detected shared linkage of DPDK 00:04:36.835 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:36.835 EAL: Selected IOVA mode 'PA' 00:04:36.836 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.095 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:37.095 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:37.095 Starting DPDK initialization... 00:04:37.095 Starting SPDK post initialization... 00:04:37.095 SPDK NVMe probe 00:04:37.095 Attaching to 0000:00:10.0 00:04:37.095 Attaching to 0000:00:11.0 00:04:37.095 Attached to 0000:00:10.0 00:04:37.095 Attached to 0000:00:11.0 00:04:37.095 Cleaning up... 
00:04:37.095 00:04:37.095 real 0m0.278s 00:04:37.095 user 0m0.085s 00:04:37.095 sys 0m0.095s 00:04:37.095 12:30:36 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.095 12:30:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:37.095 ************************************ 00:04:37.095 END TEST env_dpdk_post_init 00:04:37.095 ************************************ 00:04:37.095 12:30:36 env -- env/env.sh@26 -- # uname 00:04:37.095 12:30:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:37.095 12:30:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.095 12:30:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.095 12:30:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.095 12:30:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.095 ************************************ 00:04:37.095 START TEST env_mem_callbacks 00:04:37.095 ************************************ 00:04:37.095 12:30:36 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:37.095 EAL: Detected CPU lcores: 10 00:04:37.095 EAL: Detected NUMA nodes: 1 00:04:37.095 EAL: Detected shared linkage of DPDK 00:04:37.095 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:37.095 EAL: Selected IOVA mode 'PA' 00:04:37.355 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:37.355 00:04:37.355 00:04:37.355 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.355 http://cunit.sourceforge.net/ 00:04:37.355 00:04:37.355 00:04:37.355 Suite: memory 00:04:37.355 Test: test ... 
00:04:37.355 register 0x200000200000 2097152 00:04:37.355 malloc 3145728 00:04:37.355 register 0x200000400000 4194304 00:04:37.355 buf 0x2000004fffc0 len 3145728 PASSED 00:04:37.355 malloc 64 00:04:37.355 buf 0x2000004ffec0 len 64 PASSED 00:04:37.355 malloc 4194304 00:04:37.355 register 0x200000800000 6291456 00:04:37.355 buf 0x2000009fffc0 len 4194304 PASSED 00:04:37.355 free 0x2000004fffc0 3145728 00:04:37.355 free 0x2000004ffec0 64 00:04:37.355 unregister 0x200000400000 4194304 PASSED 00:04:37.355 free 0x2000009fffc0 4194304 00:04:37.355 unregister 0x200000800000 6291456 PASSED 00:04:37.355 malloc 8388608 00:04:37.355 register 0x200000400000 10485760 00:04:37.355 buf 0x2000005fffc0 len 8388608 PASSED 00:04:37.355 free 0x2000005fffc0 8388608 00:04:37.355 unregister 0x200000400000 10485760 PASSED 00:04:37.355 passed 00:04:37.355 00:04:37.355 Run Summary: Type Total Ran Passed Failed Inactive 00:04:37.355 suites 1 1 n/a 0 0 00:04:37.355 tests 1 1 1 0 0 00:04:37.355 asserts 15 15 15 0 n/a 00:04:37.355 00:04:37.355 Elapsed time = 0.083 seconds 00:04:37.355 00:04:37.355 real 0m0.283s 00:04:37.355 user 0m0.110s 00:04:37.355 sys 0m0.071s 00:04:37.355 12:30:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.355 12:30:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:37.355 ************************************ 00:04:37.355 END TEST env_mem_callbacks 00:04:37.355 ************************************ 00:04:37.355 00:04:37.355 real 0m9.824s 00:04:37.355 user 0m8.085s 00:04:37.355 sys 0m1.380s 00:04:37.355 12:30:37 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.355 12:30:37 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.355 ************************************ 00:04:37.355 END TEST env 00:04:37.355 ************************************ 00:04:37.355 12:30:37 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.355 12:30:37 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.355 12:30:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.355 12:30:37 -- common/autotest_common.sh@10 -- # set +x 00:04:37.355 ************************************ 00:04:37.355 START TEST rpc 00:04:37.355 ************************************ 00:04:37.355 12:30:37 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:37.614 * Looking for test storage... 00:04:37.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:37.614 12:30:37 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.614 12:30:37 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.614 12:30:37 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:37.614 12:30:37 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:37.614 12:30:37 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.614 12:30:37 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.614 12:30:37 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.614 12:30:37 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.614 12:30:37 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.614 12:30:37 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.614 12:30:37 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.614 12:30:37 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.614 12:30:37 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.614 12:30:37 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.614 12:30:37 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.614 12:30:37 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:37.614 12:30:37 rpc -- scripts/common.sh@345 -- # : 1 00:04:37.614 12:30:37 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.614 12:30:37 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.614 12:30:37 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:37.614 12:30:37 rpc -- scripts/common.sh@353 -- # local d=1 00:04:37.615 12:30:37 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.615 12:30:37 rpc -- scripts/common.sh@355 -- # echo 1 00:04:37.615 12:30:37 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.615 12:30:37 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:37.615 12:30:37 rpc -- scripts/common.sh@353 -- # local d=2 00:04:37.615 12:30:37 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.615 12:30:37 rpc -- scripts/common.sh@355 -- # echo 2 00:04:37.615 12:30:37 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.615 12:30:37 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.615 12:30:37 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.615 12:30:37 rpc -- scripts/common.sh@368 -- # return 0 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:37.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.615 --rc genhtml_branch_coverage=1 00:04:37.615 --rc genhtml_function_coverage=1 00:04:37.615 --rc genhtml_legend=1 00:04:37.615 --rc geninfo_all_blocks=1 00:04:37.615 --rc geninfo_unexecuted_blocks=1 00:04:37.615 00:04:37.615 ' 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:37.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.615 --rc genhtml_branch_coverage=1 00:04:37.615 --rc genhtml_function_coverage=1 00:04:37.615 --rc genhtml_legend=1 00:04:37.615 --rc geninfo_all_blocks=1 00:04:37.615 --rc geninfo_unexecuted_blocks=1 00:04:37.615 00:04:37.615 ' 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:37.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:37.615 --rc genhtml_branch_coverage=1 00:04:37.615 --rc genhtml_function_coverage=1 00:04:37.615 --rc genhtml_legend=1 00:04:37.615 --rc geninfo_all_blocks=1 00:04:37.615 --rc geninfo_unexecuted_blocks=1 00:04:37.615 00:04:37.615 ' 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:37.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.615 --rc genhtml_branch_coverage=1 00:04:37.615 --rc genhtml_function_coverage=1 00:04:37.615 --rc genhtml_legend=1 00:04:37.615 --rc geninfo_all_blocks=1 00:04:37.615 --rc geninfo_unexecuted_blocks=1 00:04:37.615 00:04:37.615 ' 00:04:37.615 12:30:37 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58737 00:04:37.615 12:30:37 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:37.615 12:30:37 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.615 12:30:37 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58737 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@835 -- # '[' -z 58737 ']' 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:37.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:37.615 12:30:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:37.874 [2024-12-14 12:30:37.420967] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:37.874 [2024-12-14 12:30:37.421175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58737 ] 00:04:37.874 [2024-12-14 12:30:37.597308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.133 [2024-12-14 12:30:37.711153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:38.133 [2024-12-14 12:30:37.711274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58737' to capture a snapshot of events at runtime. 00:04:38.133 [2024-12-14 12:30:37.711317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:38.133 [2024-12-14 12:30:37.711350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:38.133 [2024-12-14 12:30:37.711369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58737 for offline analysis/debug. 
00:04:38.133 [2024-12-14 12:30:37.712581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.070 12:30:38 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:39.070 12:30:38 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:39.070 12:30:38 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.070 12:30:38 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.070 12:30:38 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:39.070 12:30:38 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:39.070 12:30:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.070 12:30:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.070 12:30:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.070 ************************************ 00:04:39.070 START TEST rpc_integrity 00:04:39.070 ************************************ 00:04:39.070 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:39.070 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.070 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.070 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.070 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.070 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.070 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.070 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.071 12:30:38 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:39.071 { 00:04:39.071 "name": "Malloc0", 00:04:39.071 "aliases": [ 00:04:39.071 "28a8eba7-e128-4b2d-be3b-7e2185abffc1" 00:04:39.071 ], 00:04:39.071 "product_name": "Malloc disk", 00:04:39.071 "block_size": 512, 00:04:39.071 "num_blocks": 16384, 00:04:39.071 "uuid": "28a8eba7-e128-4b2d-be3b-7e2185abffc1", 00:04:39.071 "assigned_rate_limits": { 00:04:39.071 "rw_ios_per_sec": 0, 00:04:39.071 "rw_mbytes_per_sec": 0, 00:04:39.071 "r_mbytes_per_sec": 0, 00:04:39.071 "w_mbytes_per_sec": 0 00:04:39.071 }, 00:04:39.071 "claimed": false, 00:04:39.071 "zoned": false, 00:04:39.071 "supported_io_types": { 00:04:39.071 "read": true, 00:04:39.071 "write": true, 00:04:39.071 "unmap": true, 00:04:39.071 "flush": true, 00:04:39.071 "reset": true, 00:04:39.071 "nvme_admin": false, 00:04:39.071 "nvme_io": false, 00:04:39.071 "nvme_io_md": false, 00:04:39.071 "write_zeroes": true, 00:04:39.071 "zcopy": true, 00:04:39.071 "get_zone_info": false, 00:04:39.071 "zone_management": false, 00:04:39.071 "zone_append": false, 00:04:39.071 "compare": false, 00:04:39.071 "compare_and_write": false, 00:04:39.071 "abort": true, 00:04:39.071 "seek_hole": false, 
00:04:39.071 "seek_data": false, 00:04:39.071 "copy": true, 00:04:39.071 "nvme_iov_md": false 00:04:39.071 }, 00:04:39.071 "memory_domains": [ 00:04:39.071 { 00:04:39.071 "dma_device_id": "system", 00:04:39.071 "dma_device_type": 1 00:04:39.071 }, 00:04:39.071 { 00:04:39.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.071 "dma_device_type": 2 00:04:39.071 } 00:04:39.071 ], 00:04:39.071 "driver_specific": {} 00:04:39.071 } 00:04:39.071 ]' 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.071 [2024-12-14 12:30:38.752611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:39.071 [2024-12-14 12:30:38.752672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:39.071 [2024-12-14 12:30:38.752697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:39.071 [2024-12-14 12:30:38.752716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:39.071 [2024-12-14 12:30:38.755007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:39.071 [2024-12-14 12:30:38.755063] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:39.071 Passthru0 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:39.071 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:39.071 { 00:04:39.071 "name": "Malloc0", 00:04:39.071 "aliases": [ 00:04:39.071 "28a8eba7-e128-4b2d-be3b-7e2185abffc1" 00:04:39.071 ], 00:04:39.071 "product_name": "Malloc disk", 00:04:39.071 "block_size": 512, 00:04:39.071 "num_blocks": 16384, 00:04:39.071 "uuid": "28a8eba7-e128-4b2d-be3b-7e2185abffc1", 00:04:39.071 "assigned_rate_limits": { 00:04:39.071 "rw_ios_per_sec": 0, 00:04:39.071 "rw_mbytes_per_sec": 0, 00:04:39.071 "r_mbytes_per_sec": 0, 00:04:39.071 "w_mbytes_per_sec": 0 00:04:39.071 }, 00:04:39.071 "claimed": true, 00:04:39.071 "claim_type": "exclusive_write", 00:04:39.071 "zoned": false, 00:04:39.071 "supported_io_types": { 00:04:39.071 "read": true, 00:04:39.071 "write": true, 00:04:39.071 "unmap": true, 00:04:39.071 "flush": true, 00:04:39.071 "reset": true, 00:04:39.071 "nvme_admin": false, 00:04:39.071 "nvme_io": false, 00:04:39.071 "nvme_io_md": false, 00:04:39.071 "write_zeroes": true, 00:04:39.071 "zcopy": true, 00:04:39.071 "get_zone_info": false, 00:04:39.071 "zone_management": false, 00:04:39.071 "zone_append": false, 00:04:39.071 "compare": false, 00:04:39.071 "compare_and_write": false, 00:04:39.071 "abort": true, 00:04:39.071 "seek_hole": false, 00:04:39.071 "seek_data": false, 00:04:39.071 "copy": true, 00:04:39.071 "nvme_iov_md": false 00:04:39.071 }, 00:04:39.071 "memory_domains": [ 00:04:39.071 { 00:04:39.071 "dma_device_id": "system", 00:04:39.071 "dma_device_type": 1 00:04:39.071 }, 00:04:39.071 { 00:04:39.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.071 "dma_device_type": 2 00:04:39.071 } 00:04:39.071 ], 00:04:39.071 "driver_specific": {} 00:04:39.071 }, 00:04:39.071 { 00:04:39.071 "name": "Passthru0", 00:04:39.071 "aliases": [ 00:04:39.071 "00db6872-5906-59b7-887e-6335b8bf8469" 00:04:39.071 ], 00:04:39.071 "product_name": "passthru", 00:04:39.071 
"block_size": 512, 00:04:39.071 "num_blocks": 16384, 00:04:39.071 "uuid": "00db6872-5906-59b7-887e-6335b8bf8469", 00:04:39.071 "assigned_rate_limits": { 00:04:39.071 "rw_ios_per_sec": 0, 00:04:39.071 "rw_mbytes_per_sec": 0, 00:04:39.071 "r_mbytes_per_sec": 0, 00:04:39.071 "w_mbytes_per_sec": 0 00:04:39.071 }, 00:04:39.071 "claimed": false, 00:04:39.071 "zoned": false, 00:04:39.071 "supported_io_types": { 00:04:39.071 "read": true, 00:04:39.071 "write": true, 00:04:39.071 "unmap": true, 00:04:39.071 "flush": true, 00:04:39.071 "reset": true, 00:04:39.071 "nvme_admin": false, 00:04:39.071 "nvme_io": false, 00:04:39.071 "nvme_io_md": false, 00:04:39.071 "write_zeroes": true, 00:04:39.071 "zcopy": true, 00:04:39.071 "get_zone_info": false, 00:04:39.071 "zone_management": false, 00:04:39.071 "zone_append": false, 00:04:39.071 "compare": false, 00:04:39.071 "compare_and_write": false, 00:04:39.071 "abort": true, 00:04:39.071 "seek_hole": false, 00:04:39.071 "seek_data": false, 00:04:39.071 "copy": true, 00:04:39.071 "nvme_iov_md": false 00:04:39.071 }, 00:04:39.071 "memory_domains": [ 00:04:39.071 { 00:04:39.071 "dma_device_id": "system", 00:04:39.071 "dma_device_type": 1 00:04:39.071 }, 00:04:39.071 { 00:04:39.071 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.071 "dma_device_type": 2 00:04:39.071 } 00:04:39.071 ], 00:04:39.071 "driver_specific": { 00:04:39.071 "passthru": { 00:04:39.071 "name": "Passthru0", 00:04:39.071 "base_bdev_name": "Malloc0" 00:04:39.071 } 00:04:39.071 } 00:04:39.071 } 00:04:39.071 ]' 00:04:39.071 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:39.331 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:39.331 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.331 12:30:38 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.331 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.331 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.331 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:39.331 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:39.331 ************************************ 00:04:39.331 END TEST rpc_integrity 00:04:39.331 ************************************ 00:04:39.331 12:30:38 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:39.331 00:04:39.331 real 0m0.342s 00:04:39.331 user 0m0.188s 00:04:39.331 sys 0m0.040s 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.331 12:30:38 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.331 12:30:38 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:39.331 12:30:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.331 12:30:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.331 12:30:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.331 ************************************ 00:04:39.331 START TEST rpc_plugins 00:04:39.331 ************************************ 00:04:39.331 12:30:38 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:39.331 12:30:38 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:39.331 12:30:38 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.331 12:30:38 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.331 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.331 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:39.331 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:39.331 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.331 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.331 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.331 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:39.331 { 00:04:39.331 "name": "Malloc1", 00:04:39.331 "aliases": [ 00:04:39.331 "3dd6ed2e-3566-4b25-b973-cb36018e6001" 00:04:39.331 ], 00:04:39.331 "product_name": "Malloc disk", 00:04:39.331 "block_size": 4096, 00:04:39.331 "num_blocks": 256, 00:04:39.331 "uuid": "3dd6ed2e-3566-4b25-b973-cb36018e6001", 00:04:39.331 "assigned_rate_limits": { 00:04:39.331 "rw_ios_per_sec": 0, 00:04:39.331 "rw_mbytes_per_sec": 0, 00:04:39.331 "r_mbytes_per_sec": 0, 00:04:39.331 "w_mbytes_per_sec": 0 00:04:39.331 }, 00:04:39.331 "claimed": false, 00:04:39.331 "zoned": false, 00:04:39.331 "supported_io_types": { 00:04:39.331 "read": true, 00:04:39.331 "write": true, 00:04:39.331 "unmap": true, 00:04:39.331 "flush": true, 00:04:39.331 "reset": true, 00:04:39.331 "nvme_admin": false, 00:04:39.331 "nvme_io": false, 00:04:39.331 "nvme_io_md": false, 00:04:39.331 "write_zeroes": true, 00:04:39.331 "zcopy": true, 00:04:39.331 "get_zone_info": false, 00:04:39.331 "zone_management": false, 00:04:39.331 "zone_append": false, 00:04:39.331 "compare": false, 00:04:39.331 "compare_and_write": false, 00:04:39.331 "abort": true, 00:04:39.331 "seek_hole": false, 00:04:39.331 "seek_data": false, 00:04:39.331 "copy": 
true, 00:04:39.331 "nvme_iov_md": false 00:04:39.331 }, 00:04:39.331 "memory_domains": [ 00:04:39.331 { 00:04:39.331 "dma_device_id": "system", 00:04:39.331 "dma_device_type": 1 00:04:39.331 }, 00:04:39.331 { 00:04:39.331 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:39.331 "dma_device_type": 2 00:04:39.331 } 00:04:39.331 ], 00:04:39.331 "driver_specific": {} 00:04:39.331 } 00:04:39.331 ]' 00:04:39.331 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:39.591 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:39.591 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:39.591 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.591 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.591 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.591 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:39.591 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.591 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.591 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.591 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:39.591 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:39.591 ************************************ 00:04:39.591 END TEST rpc_plugins 00:04:39.591 ************************************ 00:04:39.591 12:30:39 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:39.591 00:04:39.591 real 0m0.172s 00:04:39.591 user 0m0.099s 00:04:39.591 sys 0m0.026s 00:04:39.591 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.591 12:30:39 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:39.591 12:30:39 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:39.591 12:30:39 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.591 12:30:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.591 12:30:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.591 ************************************ 00:04:39.591 START TEST rpc_trace_cmd_test 00:04:39.591 ************************************ 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:39.591 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58737", 00:04:39.591 "tpoint_group_mask": "0x8", 00:04:39.591 "iscsi_conn": { 00:04:39.591 "mask": "0x2", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "scsi": { 00:04:39.591 "mask": "0x4", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "bdev": { 00:04:39.591 "mask": "0x8", 00:04:39.591 "tpoint_mask": "0xffffffffffffffff" 00:04:39.591 }, 00:04:39.591 "nvmf_rdma": { 00:04:39.591 "mask": "0x10", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "nvmf_tcp": { 00:04:39.591 "mask": "0x20", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "ftl": { 00:04:39.591 "mask": "0x40", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "blobfs": { 00:04:39.591 "mask": "0x80", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "dsa": { 00:04:39.591 "mask": "0x200", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "thread": { 00:04:39.591 "mask": "0x400", 00:04:39.591 
"tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "nvme_pcie": { 00:04:39.591 "mask": "0x800", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "iaa": { 00:04:39.591 "mask": "0x1000", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "nvme_tcp": { 00:04:39.591 "mask": "0x2000", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "bdev_nvme": { 00:04:39.591 "mask": "0x4000", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "sock": { 00:04:39.591 "mask": "0x8000", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "blob": { 00:04:39.591 "mask": "0x10000", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "bdev_raid": { 00:04:39.591 "mask": "0x20000", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 }, 00:04:39.591 "scheduler": { 00:04:39.591 "mask": "0x40000", 00:04:39.591 "tpoint_mask": "0x0" 00:04:39.591 } 00:04:39.591 }' 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:39.591 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:39.851 ************************************ 00:04:39.851 END TEST rpc_trace_cmd_test 00:04:39.851 ************************************ 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:39.851 00:04:39.851 real 0m0.230s 00:04:39.851 user 
0m0.182s 00:04:39.851 sys 0m0.037s 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.851 12:30:39 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:39.851 12:30:39 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:39.851 12:30:39 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:39.851 12:30:39 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:39.851 12:30:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.851 12:30:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.851 12:30:39 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.851 ************************************ 00:04:39.851 START TEST rpc_daemon_integrity 00:04:39.851 ************************************ 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:39.851 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:40.111 { 00:04:40.111 "name": "Malloc2", 00:04:40.111 "aliases": [ 00:04:40.111 "2865cbb6-f810-4f91-be0d-4ebc3b5c6c93" 00:04:40.111 ], 00:04:40.111 "product_name": "Malloc disk", 00:04:40.111 "block_size": 512, 00:04:40.111 "num_blocks": 16384, 00:04:40.111 "uuid": "2865cbb6-f810-4f91-be0d-4ebc3b5c6c93", 00:04:40.111 "assigned_rate_limits": { 00:04:40.111 "rw_ios_per_sec": 0, 00:04:40.111 "rw_mbytes_per_sec": 0, 00:04:40.111 "r_mbytes_per_sec": 0, 00:04:40.111 "w_mbytes_per_sec": 0 00:04:40.111 }, 00:04:40.111 "claimed": false, 00:04:40.111 "zoned": false, 00:04:40.111 "supported_io_types": { 00:04:40.111 "read": true, 00:04:40.111 "write": true, 00:04:40.111 "unmap": true, 00:04:40.111 "flush": true, 00:04:40.111 "reset": true, 00:04:40.111 "nvme_admin": false, 00:04:40.111 "nvme_io": false, 00:04:40.111 "nvme_io_md": false, 00:04:40.111 "write_zeroes": true, 00:04:40.111 "zcopy": true, 00:04:40.111 "get_zone_info": false, 00:04:40.111 "zone_management": false, 00:04:40.111 "zone_append": false, 00:04:40.111 "compare": false, 00:04:40.111 "compare_and_write": false, 00:04:40.111 "abort": true, 00:04:40.111 "seek_hole": false, 00:04:40.111 "seek_data": false, 00:04:40.111 "copy": true, 00:04:40.111 "nvme_iov_md": false 00:04:40.111 }, 00:04:40.111 "memory_domains": [ 00:04:40.111 { 00:04:40.111 "dma_device_id": "system", 00:04:40.111 "dma_device_type": 1 00:04:40.111 }, 00:04:40.111 { 00:04:40.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.111 "dma_device_type": 2 00:04:40.111 } 
00:04:40.111 ], 00:04:40.111 "driver_specific": {} 00:04:40.111 } 00:04:40.111 ]' 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.111 [2024-12-14 12:30:39.665942] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:40.111 [2024-12-14 12:30:39.666010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:40.111 [2024-12-14 12:30:39.666034] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:40.111 [2024-12-14 12:30:39.666061] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:40.111 [2024-12-14 12:30:39.668575] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:40.111 [2024-12-14 12:30:39.668618] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:40.111 Passthru0 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.111 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:40.111 { 00:04:40.111 "name": "Malloc2", 00:04:40.111 "aliases": [ 00:04:40.111 "2865cbb6-f810-4f91-be0d-4ebc3b5c6c93" 
00:04:40.111 ], 00:04:40.111 "product_name": "Malloc disk", 00:04:40.111 "block_size": 512, 00:04:40.111 "num_blocks": 16384, 00:04:40.111 "uuid": "2865cbb6-f810-4f91-be0d-4ebc3b5c6c93", 00:04:40.111 "assigned_rate_limits": { 00:04:40.111 "rw_ios_per_sec": 0, 00:04:40.111 "rw_mbytes_per_sec": 0, 00:04:40.111 "r_mbytes_per_sec": 0, 00:04:40.111 "w_mbytes_per_sec": 0 00:04:40.111 }, 00:04:40.111 "claimed": true, 00:04:40.111 "claim_type": "exclusive_write", 00:04:40.111 "zoned": false, 00:04:40.111 "supported_io_types": { 00:04:40.111 "read": true, 00:04:40.111 "write": true, 00:04:40.111 "unmap": true, 00:04:40.111 "flush": true, 00:04:40.111 "reset": true, 00:04:40.111 "nvme_admin": false, 00:04:40.111 "nvme_io": false, 00:04:40.111 "nvme_io_md": false, 00:04:40.111 "write_zeroes": true, 00:04:40.111 "zcopy": true, 00:04:40.111 "get_zone_info": false, 00:04:40.111 "zone_management": false, 00:04:40.111 "zone_append": false, 00:04:40.111 "compare": false, 00:04:40.111 "compare_and_write": false, 00:04:40.111 "abort": true, 00:04:40.111 "seek_hole": false, 00:04:40.111 "seek_data": false, 00:04:40.111 "copy": true, 00:04:40.111 "nvme_iov_md": false 00:04:40.111 }, 00:04:40.111 "memory_domains": [ 00:04:40.111 { 00:04:40.111 "dma_device_id": "system", 00:04:40.111 "dma_device_type": 1 00:04:40.111 }, 00:04:40.111 { 00:04:40.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.111 "dma_device_type": 2 00:04:40.111 } 00:04:40.111 ], 00:04:40.111 "driver_specific": {} 00:04:40.111 }, 00:04:40.111 { 00:04:40.111 "name": "Passthru0", 00:04:40.111 "aliases": [ 00:04:40.111 "39fecbb1-f84a-5f22-a975-4f844ba6b919" 00:04:40.111 ], 00:04:40.111 "product_name": "passthru", 00:04:40.111 "block_size": 512, 00:04:40.111 "num_blocks": 16384, 00:04:40.111 "uuid": "39fecbb1-f84a-5f22-a975-4f844ba6b919", 00:04:40.111 "assigned_rate_limits": { 00:04:40.111 "rw_ios_per_sec": 0, 00:04:40.111 "rw_mbytes_per_sec": 0, 00:04:40.111 "r_mbytes_per_sec": 0, 00:04:40.111 "w_mbytes_per_sec": 0 
00:04:40.111 }, 00:04:40.111 "claimed": false, 00:04:40.111 "zoned": false, 00:04:40.111 "supported_io_types": { 00:04:40.111 "read": true, 00:04:40.111 "write": true, 00:04:40.111 "unmap": true, 00:04:40.111 "flush": true, 00:04:40.111 "reset": true, 00:04:40.111 "nvme_admin": false, 00:04:40.111 "nvme_io": false, 00:04:40.111 "nvme_io_md": false, 00:04:40.111 "write_zeroes": true, 00:04:40.111 "zcopy": true, 00:04:40.111 "get_zone_info": false, 00:04:40.111 "zone_management": false, 00:04:40.111 "zone_append": false, 00:04:40.111 "compare": false, 00:04:40.111 "compare_and_write": false, 00:04:40.111 "abort": true, 00:04:40.111 "seek_hole": false, 00:04:40.111 "seek_data": false, 00:04:40.111 "copy": true, 00:04:40.111 "nvme_iov_md": false 00:04:40.111 }, 00:04:40.111 "memory_domains": [ 00:04:40.111 { 00:04:40.111 "dma_device_id": "system", 00:04:40.111 "dma_device_type": 1 00:04:40.111 }, 00:04:40.111 { 00:04:40.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:40.111 "dma_device_type": 2 00:04:40.111 } 00:04:40.111 ], 00:04:40.111 "driver_specific": { 00:04:40.111 "passthru": { 00:04:40.111 "name": "Passthru0", 00:04:40.112 "base_bdev_name": "Malloc2" 00:04:40.112 } 00:04:40.112 } 00:04:40.112 } 00:04:40.112 ]' 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:40.112 ************************************ 00:04:40.112 END TEST rpc_daemon_integrity 00:04:40.112 ************************************ 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:40.112 00:04:40.112 real 0m0.337s 00:04:40.112 user 0m0.188s 00:04:40.112 sys 0m0.043s 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.112 12:30:39 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:40.371 12:30:39 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:40.371 12:30:39 rpc -- rpc/rpc.sh@84 -- # killprocess 58737 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@954 -- # '[' -z 58737 ']' 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@958 -- # kill -0 58737 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@959 -- # uname 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58737 00:04:40.371 killing process with pid 58737 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58737' 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@973 -- # kill 58737 00:04:40.371 12:30:39 rpc -- common/autotest_common.sh@978 -- # wait 58737 00:04:43.036 00:04:43.036 real 0m5.212s 00:04:43.036 user 0m5.728s 00:04:43.036 sys 0m0.875s 00:04:43.036 12:30:42 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.036 ************************************ 00:04:43.036 END TEST rpc 00:04:43.036 ************************************ 00:04:43.036 12:30:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.036 12:30:42 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:43.036 12:30:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.036 12:30:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.036 12:30:42 -- common/autotest_common.sh@10 -- # set +x 00:04:43.036 ************************************ 00:04:43.036 START TEST skip_rpc 00:04:43.036 ************************************ 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:43.036 * Looking for test storage... 
00:04:43.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.036 12:30:42 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.036 --rc genhtml_branch_coverage=1 00:04:43.036 --rc genhtml_function_coverage=1 00:04:43.036 --rc genhtml_legend=1 00:04:43.036 --rc geninfo_all_blocks=1 00:04:43.036 --rc geninfo_unexecuted_blocks=1 00:04:43.036 00:04:43.036 ' 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.036 --rc genhtml_branch_coverage=1 00:04:43.036 --rc genhtml_function_coverage=1 00:04:43.036 --rc genhtml_legend=1 00:04:43.036 --rc geninfo_all_blocks=1 00:04:43.036 --rc geninfo_unexecuted_blocks=1 00:04:43.036 00:04:43.036 ' 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:43.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.036 --rc genhtml_branch_coverage=1 00:04:43.036 --rc genhtml_function_coverage=1 00:04:43.036 --rc genhtml_legend=1 00:04:43.036 --rc geninfo_all_blocks=1 00:04:43.036 --rc geninfo_unexecuted_blocks=1 00:04:43.036 00:04:43.036 ' 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.036 --rc genhtml_branch_coverage=1 00:04:43.036 --rc genhtml_function_coverage=1 00:04:43.036 --rc genhtml_legend=1 00:04:43.036 --rc geninfo_all_blocks=1 00:04:43.036 --rc geninfo_unexecuted_blocks=1 00:04:43.036 00:04:43.036 ' 00:04:43.036 12:30:42 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.036 12:30:42 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:43.036 12:30:42 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.036 12:30:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.036 ************************************ 00:04:43.036 START TEST skip_rpc 00:04:43.036 ************************************ 00:04:43.036 12:30:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:43.036 12:30:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58966 00:04:43.036 12:30:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:43.036 12:30:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.036 12:30:42 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:43.036 [2024-12-14 12:30:42.675810] Starting SPDK v25.01-pre 
git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:43.036 [2024-12-14 12:30:42.676020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58966 ] 00:04:43.296 [2024-12-14 12:30:42.853547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.296 [2024-12-14 12:30:42.970629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58966 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58966 ']' 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58966 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58966 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58966' 00:04:48.574 killing process with pid 58966 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58966 00:04:48.574 12:30:47 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58966 00:04:50.484 00:04:50.484 real 0m7.417s 00:04:50.484 user 0m6.952s 00:04:50.484 sys 0m0.371s 00:04:50.484 12:30:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.484 12:30:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.484 ************************************ 00:04:50.484 END TEST skip_rpc 00:04:50.484 ************************************ 00:04:50.484 12:30:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.484 12:30:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.484 12:30:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.484 12:30:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.484 
************************************ 00:04:50.484 START TEST skip_rpc_with_json 00:04:50.484 ************************************ 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59070 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59070 00:04:50.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59070 ']' 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.484 12:30:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.484 [2024-12-14 12:30:50.159349] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:50.484 [2024-12-14 12:30:50.159541] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59070 ] 00:04:50.744 [2024-12-14 12:30:50.334956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.744 [2024-12-14 12:30:50.448273] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.684 [2024-12-14 12:30:51.311008] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.684 request: 00:04:51.684 { 00:04:51.684 "trtype": "tcp", 00:04:51.684 "method": "nvmf_get_transports", 00:04:51.684 "req_id": 1 00:04:51.684 } 00:04:51.684 Got JSON-RPC error response 00:04:51.684 response: 00:04:51.684 { 00:04:51.684 "code": -19, 00:04:51.684 "message": "No such device" 00:04:51.684 } 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.684 [2024-12-14 12:30:51.323154] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.684 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.944 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.944 12:30:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.944 { 00:04:51.944 "subsystems": [ 00:04:51.944 { 00:04:51.944 "subsystem": "fsdev", 00:04:51.944 "config": [ 00:04:51.944 { 00:04:51.944 "method": "fsdev_set_opts", 00:04:51.944 "params": { 00:04:51.944 "fsdev_io_pool_size": 65535, 00:04:51.944 "fsdev_io_cache_size": 256 00:04:51.944 } 00:04:51.944 } 00:04:51.944 ] 00:04:51.944 }, 00:04:51.944 { 00:04:51.944 "subsystem": "keyring", 00:04:51.944 "config": [] 00:04:51.944 }, 00:04:51.944 { 00:04:51.944 "subsystem": "iobuf", 00:04:51.944 "config": [ 00:04:51.944 { 00:04:51.944 "method": "iobuf_set_options", 00:04:51.944 "params": { 00:04:51.944 "small_pool_count": 8192, 00:04:51.944 "large_pool_count": 1024, 00:04:51.944 "small_bufsize": 8192, 00:04:51.944 "large_bufsize": 135168, 00:04:51.944 "enable_numa": false 00:04:51.944 } 00:04:51.944 } 00:04:51.944 ] 00:04:51.944 }, 00:04:51.944 { 00:04:51.944 "subsystem": "sock", 00:04:51.944 "config": [ 00:04:51.944 { 00:04:51.944 "method": "sock_set_default_impl", 00:04:51.944 "params": { 00:04:51.944 "impl_name": "posix" 00:04:51.944 } 00:04:51.944 }, 00:04:51.944 { 00:04:51.944 "method": "sock_impl_set_options", 00:04:51.944 "params": { 00:04:51.944 "impl_name": "ssl", 00:04:51.944 "recv_buf_size": 4096, 00:04:51.944 "send_buf_size": 4096, 00:04:51.944 "enable_recv_pipe": true, 00:04:51.944 "enable_quickack": false, 00:04:51.944 
"enable_placement_id": 0, 00:04:51.944 "enable_zerocopy_send_server": true, 00:04:51.944 "enable_zerocopy_send_client": false, 00:04:51.944 "zerocopy_threshold": 0, 00:04:51.944 "tls_version": 0, 00:04:51.944 "enable_ktls": false 00:04:51.944 } 00:04:51.944 }, 00:04:51.944 { 00:04:51.944 "method": "sock_impl_set_options", 00:04:51.944 "params": { 00:04:51.944 "impl_name": "posix", 00:04:51.944 "recv_buf_size": 2097152, 00:04:51.944 "send_buf_size": 2097152, 00:04:51.944 "enable_recv_pipe": true, 00:04:51.944 "enable_quickack": false, 00:04:51.944 "enable_placement_id": 0, 00:04:51.944 "enable_zerocopy_send_server": true, 00:04:51.944 "enable_zerocopy_send_client": false, 00:04:51.944 "zerocopy_threshold": 0, 00:04:51.944 "tls_version": 0, 00:04:51.944 "enable_ktls": false 00:04:51.944 } 00:04:51.944 } 00:04:51.944 ] 00:04:51.944 }, 00:04:51.944 { 00:04:51.944 "subsystem": "vmd", 00:04:51.944 "config": [] 00:04:51.944 }, 00:04:51.944 { 00:04:51.944 "subsystem": "accel", 00:04:51.944 "config": [ 00:04:51.944 { 00:04:51.944 "method": "accel_set_options", 00:04:51.944 "params": { 00:04:51.944 "small_cache_size": 128, 00:04:51.944 "large_cache_size": 16, 00:04:51.944 "task_count": 2048, 00:04:51.944 "sequence_count": 2048, 00:04:51.944 "buf_count": 2048 00:04:51.944 } 00:04:51.944 } 00:04:51.944 ] 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "bdev", 00:04:51.945 "config": [ 00:04:51.945 { 00:04:51.945 "method": "bdev_set_options", 00:04:51.945 "params": { 00:04:51.945 "bdev_io_pool_size": 65535, 00:04:51.945 "bdev_io_cache_size": 256, 00:04:51.945 "bdev_auto_examine": true, 00:04:51.945 "iobuf_small_cache_size": 128, 00:04:51.945 "iobuf_large_cache_size": 16 00:04:51.945 } 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "method": "bdev_raid_set_options", 00:04:51.945 "params": { 00:04:51.945 "process_window_size_kb": 1024, 00:04:51.945 "process_max_bandwidth_mb_sec": 0 00:04:51.945 } 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "method": "bdev_iscsi_set_options", 
00:04:51.945 "params": { 00:04:51.945 "timeout_sec": 30 00:04:51.945 } 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "method": "bdev_nvme_set_options", 00:04:51.945 "params": { 00:04:51.945 "action_on_timeout": "none", 00:04:51.945 "timeout_us": 0, 00:04:51.945 "timeout_admin_us": 0, 00:04:51.945 "keep_alive_timeout_ms": 10000, 00:04:51.945 "arbitration_burst": 0, 00:04:51.945 "low_priority_weight": 0, 00:04:51.945 "medium_priority_weight": 0, 00:04:51.945 "high_priority_weight": 0, 00:04:51.945 "nvme_adminq_poll_period_us": 10000, 00:04:51.945 "nvme_ioq_poll_period_us": 0, 00:04:51.945 "io_queue_requests": 0, 00:04:51.945 "delay_cmd_submit": true, 00:04:51.945 "transport_retry_count": 4, 00:04:51.945 "bdev_retry_count": 3, 00:04:51.945 "transport_ack_timeout": 0, 00:04:51.945 "ctrlr_loss_timeout_sec": 0, 00:04:51.945 "reconnect_delay_sec": 0, 00:04:51.945 "fast_io_fail_timeout_sec": 0, 00:04:51.945 "disable_auto_failback": false, 00:04:51.945 "generate_uuids": false, 00:04:51.945 "transport_tos": 0, 00:04:51.945 "nvme_error_stat": false, 00:04:51.945 "rdma_srq_size": 0, 00:04:51.945 "io_path_stat": false, 00:04:51.945 "allow_accel_sequence": false, 00:04:51.945 "rdma_max_cq_size": 0, 00:04:51.945 "rdma_cm_event_timeout_ms": 0, 00:04:51.945 "dhchap_digests": [ 00:04:51.945 "sha256", 00:04:51.945 "sha384", 00:04:51.945 "sha512" 00:04:51.945 ], 00:04:51.945 "dhchap_dhgroups": [ 00:04:51.945 "null", 00:04:51.945 "ffdhe2048", 00:04:51.945 "ffdhe3072", 00:04:51.945 "ffdhe4096", 00:04:51.945 "ffdhe6144", 00:04:51.945 "ffdhe8192" 00:04:51.945 ], 00:04:51.945 "rdma_umr_per_io": false 00:04:51.945 } 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "method": "bdev_nvme_set_hotplug", 00:04:51.945 "params": { 00:04:51.945 "period_us": 100000, 00:04:51.945 "enable": false 00:04:51.945 } 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "method": "bdev_wait_for_examine" 00:04:51.945 } 00:04:51.945 ] 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "scsi", 00:04:51.945 "config": null 
00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "scheduler", 00:04:51.945 "config": [ 00:04:51.945 { 00:04:51.945 "method": "framework_set_scheduler", 00:04:51.945 "params": { 00:04:51.945 "name": "static" 00:04:51.945 } 00:04:51.945 } 00:04:51.945 ] 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "vhost_scsi", 00:04:51.945 "config": [] 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "vhost_blk", 00:04:51.945 "config": [] 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "ublk", 00:04:51.945 "config": [] 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "nbd", 00:04:51.945 "config": [] 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "nvmf", 00:04:51.945 "config": [ 00:04:51.945 { 00:04:51.945 "method": "nvmf_set_config", 00:04:51.945 "params": { 00:04:51.945 "discovery_filter": "match_any", 00:04:51.945 "admin_cmd_passthru": { 00:04:51.945 "identify_ctrlr": false 00:04:51.945 }, 00:04:51.945 "dhchap_digests": [ 00:04:51.945 "sha256", 00:04:51.945 "sha384", 00:04:51.945 "sha512" 00:04:51.945 ], 00:04:51.945 "dhchap_dhgroups": [ 00:04:51.945 "null", 00:04:51.945 "ffdhe2048", 00:04:51.945 "ffdhe3072", 00:04:51.945 "ffdhe4096", 00:04:51.945 "ffdhe6144", 00:04:51.945 "ffdhe8192" 00:04:51.945 ] 00:04:51.945 } 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "method": "nvmf_set_max_subsystems", 00:04:51.945 "params": { 00:04:51.945 "max_subsystems": 1024 00:04:51.945 } 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "method": "nvmf_set_crdt", 00:04:51.945 "params": { 00:04:51.945 "crdt1": 0, 00:04:51.945 "crdt2": 0, 00:04:51.945 "crdt3": 0 00:04:51.945 } 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "method": "nvmf_create_transport", 00:04:51.945 "params": { 00:04:51.945 "trtype": "TCP", 00:04:51.945 "max_queue_depth": 128, 00:04:51.945 "max_io_qpairs_per_ctrlr": 127, 00:04:51.945 "in_capsule_data_size": 4096, 00:04:51.945 "max_io_size": 131072, 00:04:51.945 "io_unit_size": 131072, 00:04:51.945 "max_aq_depth": 128, 00:04:51.945 
"num_shared_buffers": 511, 00:04:51.945 "buf_cache_size": 4294967295, 00:04:51.945 "dif_insert_or_strip": false, 00:04:51.945 "zcopy": false, 00:04:51.945 "c2h_success": true, 00:04:51.945 "sock_priority": 0, 00:04:51.945 "abort_timeout_sec": 1, 00:04:51.945 "ack_timeout": 0, 00:04:51.945 "data_wr_pool_size": 0 00:04:51.945 } 00:04:51.945 } 00:04:51.945 ] 00:04:51.945 }, 00:04:51.945 { 00:04:51.945 "subsystem": "iscsi", 00:04:51.945 "config": [ 00:04:51.945 { 00:04:51.945 "method": "iscsi_set_options", 00:04:51.945 "params": { 00:04:51.945 "node_base": "iqn.2016-06.io.spdk", 00:04:51.945 "max_sessions": 128, 00:04:51.945 "max_connections_per_session": 2, 00:04:51.945 "max_queue_depth": 64, 00:04:51.945 "default_time2wait": 2, 00:04:51.945 "default_time2retain": 20, 00:04:51.945 "first_burst_length": 8192, 00:04:51.945 "immediate_data": true, 00:04:51.945 "allow_duplicated_isid": false, 00:04:51.945 "error_recovery_level": 0, 00:04:51.945 "nop_timeout": 60, 00:04:51.945 "nop_in_interval": 30, 00:04:51.945 "disable_chap": false, 00:04:51.945 "require_chap": false, 00:04:51.945 "mutual_chap": false, 00:04:51.945 "chap_group": 0, 00:04:51.945 "max_large_datain_per_connection": 64, 00:04:51.945 "max_r2t_per_connection": 4, 00:04:51.945 "pdu_pool_size": 36864, 00:04:51.945 "immediate_data_pool_size": 16384, 00:04:51.945 "data_out_pool_size": 2048 00:04:51.945 } 00:04:51.945 } 00:04:51.945 ] 00:04:51.945 } 00:04:51.945 ] 00:04:51.945 } 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59070 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59070 ']' 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59070 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:51.945 12:30:51 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59070 00:04:51.945 killing process with pid 59070 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59070' 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59070 00:04:51.945 12:30:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59070 00:04:54.482 12:30:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59126 00:04:54.482 12:30:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:54.482 12:30:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59126 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59126 ']' 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59126 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59126 00:04:59.762 killing process with pid 59126 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:59.762 12:30:59 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59126' 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59126 00:04:59.762 12:30:59 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59126 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.303 00:05:02.303 real 0m11.667s 00:05:02.303 user 0m10.927s 00:05:02.303 sys 0m0.992s 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.303 ************************************ 00:05:02.303 END TEST skip_rpc_with_json 00:05:02.303 ************************************ 00:05:02.303 12:31:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:02.303 12:31:01 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.303 12:31:01 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.303 12:31:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.303 ************************************ 00:05:02.303 START TEST skip_rpc_with_delay 00:05:02.303 ************************************ 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- 
# local es=0 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.303 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:02.304 [2024-12-14 12:31:01.897869] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:02.304 ************************************ 00:05:02.304 END TEST skip_rpc_with_delay 00:05:02.304 ************************************ 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.304 00:05:02.304 real 0m0.166s 00:05:02.304 user 0m0.085s 00:05:02.304 sys 0m0.080s 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.304 12:31:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:02.304 12:31:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:02.304 12:31:02 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:02.304 12:31:02 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:02.304 12:31:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.304 12:31:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.304 12:31:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.304 ************************************ 00:05:02.304 START TEST exit_on_failed_rpc_init 00:05:02.304 ************************************ 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59265 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59265 00:05:02.304 12:31:02 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59265 ']' 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.304 12:31:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:02.564 [2024-12-14 12:31:02.125659] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:02.564 [2024-12-14 12:31:02.125866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59265 ] 00:05:02.564 [2024-12-14 12:31:02.284314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.823 [2024-12-14 12:31:02.422658] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.762 12:31:03 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:03.762 12:31:03 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.022 [2024-12-14 12:31:03.537342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:04.022 [2024-12-14 12:31:03.537888] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59283 ] 00:05:04.022 [2024-12-14 12:31:03.713790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.282 [2024-12-14 12:31:03.836481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.282 [2024-12-14 12:31:03.836682] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:04.282 [2024-12-14 12:31:03.836733] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:04.282 [2024-12-14 12:31:03.836758] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59265 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59265 ']' 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59265 00:05:04.542 12:31:04 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59265 00:05:04.542 killing process with pid 59265 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59265' 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59265 00:05:04.542 12:31:04 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59265 00:05:07.081 00:05:07.081 real 0m4.759s 00:05:07.081 user 0m4.981s 00:05:07.081 sys 0m0.718s 00:05:07.081 12:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.081 ************************************ 00:05:07.081 END TEST exit_on_failed_rpc_init 00:05:07.081 ************************************ 00:05:07.081 12:31:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:07.340 12:31:06 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:07.340 00:05:07.340 real 0m24.492s 00:05:07.340 user 0m23.161s 00:05:07.340 sys 0m2.445s 00:05:07.340 12:31:06 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.340 12:31:06 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.340 ************************************ 00:05:07.340 END TEST skip_rpc 00:05:07.340 ************************************ 00:05:07.340 12:31:06 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:07.340 12:31:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.340 12:31:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.340 12:31:06 -- common/autotest_common.sh@10 -- # set +x 00:05:07.340 ************************************ 00:05:07.340 START TEST rpc_client 00:05:07.340 ************************************ 00:05:07.340 12:31:06 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:07.340 * Looking for test storage... 00:05:07.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:07.340 12:31:07 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.340 12:31:07 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.340 12:31:07 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.600 12:31:07 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.600 12:31:07 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:07.600 12:31:07 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.600 12:31:07 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.600 --rc genhtml_branch_coverage=1 00:05:07.600 --rc genhtml_function_coverage=1 00:05:07.600 --rc genhtml_legend=1 00:05:07.600 --rc geninfo_all_blocks=1 00:05:07.600 --rc geninfo_unexecuted_blocks=1 00:05:07.600 00:05:07.600 ' 00:05:07.600 12:31:07 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.600 --rc genhtml_branch_coverage=1 00:05:07.600 --rc genhtml_function_coverage=1 00:05:07.600 --rc 
genhtml_legend=1 00:05:07.600 --rc geninfo_all_blocks=1 00:05:07.600 --rc geninfo_unexecuted_blocks=1 00:05:07.600 00:05:07.600 ' 00:05:07.600 12:31:07 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.600 --rc genhtml_branch_coverage=1 00:05:07.600 --rc genhtml_function_coverage=1 00:05:07.600 --rc genhtml_legend=1 00:05:07.600 --rc geninfo_all_blocks=1 00:05:07.600 --rc geninfo_unexecuted_blocks=1 00:05:07.600 00:05:07.600 ' 00:05:07.600 12:31:07 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.600 --rc genhtml_branch_coverage=1 00:05:07.600 --rc genhtml_function_coverage=1 00:05:07.600 --rc genhtml_legend=1 00:05:07.600 --rc geninfo_all_blocks=1 00:05:07.600 --rc geninfo_unexecuted_blocks=1 00:05:07.600 00:05:07.600 ' 00:05:07.600 12:31:07 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:07.600 OK 00:05:07.600 12:31:07 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:07.600 00:05:07.600 real 0m0.287s 00:05:07.600 user 0m0.155s 00:05:07.600 sys 0m0.144s 00:05:07.600 ************************************ 00:05:07.600 END TEST rpc_client 00:05:07.600 ************************************ 00:05:07.600 12:31:07 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.600 12:31:07 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:07.600 12:31:07 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:07.600 12:31:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.600 12:31:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.600 12:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:07.600 ************************************ 00:05:07.600 START TEST json_config 
00:05:07.600 ************************************ 00:05:07.600 12:31:07 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:07.860 12:31:07 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.860 12:31:07 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.860 12:31:07 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.860 12:31:07 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.860 12:31:07 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.860 12:31:07 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.860 12:31:07 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.860 12:31:07 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.860 12:31:07 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.860 12:31:07 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.860 12:31:07 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.860 12:31:07 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.860 12:31:07 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.860 12:31:07 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.860 12:31:07 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.860 12:31:07 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:07.861 12:31:07 json_config -- scripts/common.sh@345 -- # : 1 00:05:07.861 12:31:07 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.861 12:31:07 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.861 12:31:07 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:07.861 12:31:07 json_config -- scripts/common.sh@353 -- # local d=1 00:05:07.861 12:31:07 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.861 12:31:07 json_config -- scripts/common.sh@355 -- # echo 1 00:05:07.861 12:31:07 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.861 12:31:07 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:07.861 12:31:07 json_config -- scripts/common.sh@353 -- # local d=2 00:05:07.861 12:31:07 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.861 12:31:07 json_config -- scripts/common.sh@355 -- # echo 2 00:05:07.861 12:31:07 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.861 12:31:07 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.861 12:31:07 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.861 12:31:07 json_config -- scripts/common.sh@368 -- # return 0 00:05:07.861 12:31:07 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.861 12:31:07 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.861 --rc genhtml_branch_coverage=1 00:05:07.861 --rc genhtml_function_coverage=1 00:05:07.861 --rc genhtml_legend=1 00:05:07.861 --rc geninfo_all_blocks=1 00:05:07.861 --rc geninfo_unexecuted_blocks=1 00:05:07.861 00:05:07.861 ' 00:05:07.861 12:31:07 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.861 --rc genhtml_branch_coverage=1 00:05:07.861 --rc genhtml_function_coverage=1 00:05:07.861 --rc genhtml_legend=1 00:05:07.861 --rc geninfo_all_blocks=1 00:05:07.861 --rc geninfo_unexecuted_blocks=1 00:05:07.861 00:05:07.861 ' 00:05:07.861 12:31:07 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:07.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.861 --rc genhtml_branch_coverage=1 00:05:07.861 --rc genhtml_function_coverage=1 00:05:07.861 --rc genhtml_legend=1 00:05:07.861 --rc geninfo_all_blocks=1 00:05:07.861 --rc geninfo_unexecuted_blocks=1 00:05:07.861 00:05:07.861 ' 00:05:07.861 12:31:07 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.861 --rc genhtml_branch_coverage=1 00:05:07.861 --rc genhtml_function_coverage=1 00:05:07.861 --rc genhtml_legend=1 00:05:07.861 --rc geninfo_all_blocks=1 00:05:07.861 --rc geninfo_unexecuted_blocks=1 00:05:07.861 00:05:07.861 ' 00:05:07.861 12:31:07 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:acf39263-853c-4270-82f2-9ace538f8911 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=acf39263-853c-4270-82f2-9ace538f8911 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:07.861 12:31:07 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.861 12:31:07 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.861 12:31:07 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.861 12:31:07 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.861 12:31:07 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.861 12:31:07 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.861 12:31:07 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.861 12:31:07 json_config -- paths/export.sh@5 -- # export PATH 00:05:07.861 12:31:07 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@51 -- # : 0 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.861 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.861 12:31:07 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.861 12:31:07 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:07.861 12:31:07 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:07.861 12:31:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:07.861 12:31:07 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:07.861 12:31:07 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:07.861 WARNING: No tests are enabled so not running JSON configuration tests 00:05:07.861 12:31:07 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:07.861 12:31:07 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:07.861 ************************************ 00:05:07.861 END TEST json_config 00:05:07.861 ************************************ 00:05:07.861 00:05:07.861 real 0m0.231s 00:05:07.861 user 0m0.134s 00:05:07.861 sys 0m0.101s 00:05:07.861 12:31:07 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.861 12:31:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:07.861 12:31:07 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:07.861 12:31:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.861 12:31:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.861 12:31:07 -- common/autotest_common.sh@10 -- # set +x 00:05:07.861 ************************************ 00:05:07.861 START TEST json_config_extra_key 00:05:07.861 ************************************ 00:05:07.861 12:31:07 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:08.133 12:31:07 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.133 12:31:07 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:05:08.133 12:31:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.133 12:31:07 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.133 12:31:07 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.134 --rc genhtml_branch_coverage=1 00:05:08.134 --rc genhtml_function_coverage=1 00:05:08.134 --rc genhtml_legend=1 00:05:08.134 --rc geninfo_all_blocks=1 00:05:08.134 --rc geninfo_unexecuted_blocks=1 00:05:08.134 00:05:08.134 ' 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.134 --rc genhtml_branch_coverage=1 00:05:08.134 --rc genhtml_function_coverage=1 00:05:08.134 --rc 
genhtml_legend=1 00:05:08.134 --rc geninfo_all_blocks=1 00:05:08.134 --rc geninfo_unexecuted_blocks=1 00:05:08.134 00:05:08.134 ' 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.134 --rc genhtml_branch_coverage=1 00:05:08.134 --rc genhtml_function_coverage=1 00:05:08.134 --rc genhtml_legend=1 00:05:08.134 --rc geninfo_all_blocks=1 00:05:08.134 --rc geninfo_unexecuted_blocks=1 00:05:08.134 00:05:08.134 ' 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.134 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.134 --rc genhtml_branch_coverage=1 00:05:08.134 --rc genhtml_function_coverage=1 00:05:08.134 --rc genhtml_legend=1 00:05:08.134 --rc geninfo_all_blocks=1 00:05:08.134 --rc geninfo_unexecuted_blocks=1 00:05:08.134 00:05:08.134 ' 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:acf39263-853c-4270-82f2-9ace538f8911 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=acf39263-853c-4270-82f2-9ace538f8911 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:08.134 12:31:07 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:08.134 12:31:07 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.134 12:31:07 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.134 12:31:07 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.134 12:31:07 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:08.134 12:31:07 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:08.134 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:08.134 12:31:07 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:08.134 INFO: launching applications... 
00:05:08.134 12:31:07 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59499 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:08.134 Waiting for target to run... 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59499 /var/tmp/spdk_tgt.sock 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59499 ']' 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:08.134 12:31:07 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:05:08.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.134 12:31:07 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:08.427 [2024-12-14 12:31:07.895359] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:08.427 [2024-12-14 12:31:07.895583] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59499 ] 00:05:08.702 [2024-12-14 12:31:08.300789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.702 [2024-12-14 12:31:08.424730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.641 00:05:09.641 INFO: shutting down applications... 00:05:09.641 12:31:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.641 12:31:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:09.641 12:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:09.641 12:31:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59499 ]] 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59499 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:09.641 12:31:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.210 12:31:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.210 12:31:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.210 12:31:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:10.210 12:31:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.779 12:31:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.779 12:31:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.779 12:31:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:10.779 12:31:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.039 12:31:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.039 12:31:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.039 12:31:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:11.039 12:31:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.609 12:31:11 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:11.609 12:31:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.609 12:31:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:11.609 12:31:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.261 12:31:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.261 12:31:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.261 12:31:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:12.261 12:31:11 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:12.521 12:31:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:12.521 12:31:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:12.521 SPDK target shutdown done 00:05:12.521 12:31:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59499 00:05:12.521 12:31:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:12.521 12:31:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:12.521 12:31:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:12.521 12:31:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:12.521 12:31:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:12.521 Success 00:05:12.521 00:05:12.521 real 0m4.703s 00:05:12.521 user 0m4.357s 00:05:12.521 sys 0m0.628s 00:05:12.521 12:31:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.780 12:31:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.780 ************************************ 00:05:12.780 END TEST json_config_extra_key 00:05:12.780 ************************************ 00:05:12.780 12:31:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.780 12:31:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.780 12:31:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.780 12:31:12 -- common/autotest_common.sh@10 -- # set +x 00:05:12.780 ************************************ 00:05:12.780 START TEST alias_rpc 00:05:12.780 ************************************ 00:05:12.780 12:31:12 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:12.780 * Looking for test storage... 00:05:12.780 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:12.780 12:31:12 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:12.780 12:31:12 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:12.780 12:31:12 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.039 12:31:12 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.039 12:31:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:13.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.039 --rc genhtml_branch_coverage=1 00:05:13.039 --rc genhtml_function_coverage=1 00:05:13.039 --rc genhtml_legend=1 00:05:13.039 --rc geninfo_all_blocks=1 00:05:13.039 --rc geninfo_unexecuted_blocks=1 00:05:13.039 00:05:13.039 ' 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:13.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.039 --rc genhtml_branch_coverage=1 00:05:13.039 --rc genhtml_function_coverage=1 00:05:13.039 --rc 
genhtml_legend=1 00:05:13.039 --rc geninfo_all_blocks=1 00:05:13.039 --rc geninfo_unexecuted_blocks=1 00:05:13.039 00:05:13.039 ' 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:13.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.039 --rc genhtml_branch_coverage=1 00:05:13.039 --rc genhtml_function_coverage=1 00:05:13.039 --rc genhtml_legend=1 00:05:13.039 --rc geninfo_all_blocks=1 00:05:13.039 --rc geninfo_unexecuted_blocks=1 00:05:13.039 00:05:13.039 ' 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:13.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.039 --rc genhtml_branch_coverage=1 00:05:13.039 --rc genhtml_function_coverage=1 00:05:13.039 --rc genhtml_legend=1 00:05:13.039 --rc geninfo_all_blocks=1 00:05:13.039 --rc geninfo_unexecuted_blocks=1 00:05:13.039 00:05:13.039 ' 00:05:13.039 12:31:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.039 12:31:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59610 00:05:13.039 12:31:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:13.039 12:31:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59610 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59610 ']' 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.039 12:31:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.039 [2024-12-14 12:31:12.637916] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:13.039 [2024-12-14 12:31:12.638120] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59610 ] 00:05:13.298 [2024-12-14 12:31:12.813531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.298 [2024-12-14 12:31:12.946802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.237 12:31:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.237 12:31:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.237 12:31:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:14.497 12:31:14 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59610 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59610 ']' 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59610 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59610 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59610' 00:05:14.497 killing process with pid 59610 00:05:14.497 12:31:14 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 59610 00:05:14.497 12:31:14 alias_rpc -- common/autotest_common.sh@978 -- # wait 59610 00:05:17.046 ************************************ 00:05:17.047 END TEST alias_rpc 00:05:17.047 ************************************ 00:05:17.047 00:05:17.047 real 0m4.388s 00:05:17.047 user 0m4.189s 00:05:17.047 sys 0m0.753s 00:05:17.047 12:31:16 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.047 12:31:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.047 12:31:16 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:17.047 12:31:16 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:17.047 12:31:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.047 12:31:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.047 12:31:16 -- common/autotest_common.sh@10 -- # set +x 00:05:17.047 ************************************ 00:05:17.047 START TEST spdkcli_tcp 00:05:17.047 ************************************ 00:05:17.047 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:17.317 * Looking for test storage... 
00:05:17.317 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.317 12:31:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.317 --rc genhtml_branch_coverage=1 00:05:17.317 --rc genhtml_function_coverage=1 00:05:17.317 --rc genhtml_legend=1 00:05:17.317 --rc geninfo_all_blocks=1 00:05:17.317 --rc geninfo_unexecuted_blocks=1 00:05:17.317 00:05:17.317 ' 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.317 --rc genhtml_branch_coverage=1 00:05:17.317 --rc genhtml_function_coverage=1 00:05:17.317 --rc genhtml_legend=1 00:05:17.317 --rc geninfo_all_blocks=1 00:05:17.317 --rc geninfo_unexecuted_blocks=1 00:05:17.317 00:05:17.317 ' 00:05:17.317 12:31:16 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.317 --rc genhtml_branch_coverage=1 00:05:17.317 --rc genhtml_function_coverage=1 00:05:17.317 --rc genhtml_legend=1 00:05:17.317 --rc geninfo_all_blocks=1 00:05:17.317 --rc geninfo_unexecuted_blocks=1 00:05:17.317 00:05:17.317 ' 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.317 --rc genhtml_branch_coverage=1 00:05:17.317 --rc genhtml_function_coverage=1 00:05:17.317 --rc genhtml_legend=1 00:05:17.317 --rc geninfo_all_blocks=1 00:05:17.317 --rc geninfo_unexecuted_blocks=1 00:05:17.317 00:05:17.317 ' 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59723 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:17.317 12:31:16 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59723 00:05:17.317 12:31:16 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 59723 ']' 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.317 12:31:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:17.577 [2024-12-14 12:31:17.063565] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:17.577 [2024-12-14 12:31:17.063771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59723 ] 00:05:17.577 [2024-12-14 12:31:17.237895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:17.836 [2024-12-14 12:31:17.345802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.836 [2024-12-14 12:31:17.345840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.772 12:31:18 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.772 12:31:18 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:18.772 12:31:18 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59740 00:05:18.772 12:31:18 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:18.772 12:31:18 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:18.772 [ 00:05:18.772 "bdev_malloc_delete", 
00:05:18.772 "bdev_malloc_create", 00:05:18.772 "bdev_null_resize", 00:05:18.772 "bdev_null_delete", 00:05:18.772 "bdev_null_create", 00:05:18.772 "bdev_nvme_cuse_unregister", 00:05:18.772 "bdev_nvme_cuse_register", 00:05:18.772 "bdev_opal_new_user", 00:05:18.772 "bdev_opal_set_lock_state", 00:05:18.772 "bdev_opal_delete", 00:05:18.772 "bdev_opal_get_info", 00:05:18.772 "bdev_opal_create", 00:05:18.772 "bdev_nvme_opal_revert", 00:05:18.772 "bdev_nvme_opal_init", 00:05:18.772 "bdev_nvme_send_cmd", 00:05:18.772 "bdev_nvme_set_keys", 00:05:18.772 "bdev_nvme_get_path_iostat", 00:05:18.772 "bdev_nvme_get_mdns_discovery_info", 00:05:18.772 "bdev_nvme_stop_mdns_discovery", 00:05:18.772 "bdev_nvme_start_mdns_discovery", 00:05:18.772 "bdev_nvme_set_multipath_policy", 00:05:18.772 "bdev_nvme_set_preferred_path", 00:05:18.772 "bdev_nvme_get_io_paths", 00:05:18.772 "bdev_nvme_remove_error_injection", 00:05:18.772 "bdev_nvme_add_error_injection", 00:05:18.772 "bdev_nvme_get_discovery_info", 00:05:18.772 "bdev_nvme_stop_discovery", 00:05:18.772 "bdev_nvme_start_discovery", 00:05:18.772 "bdev_nvme_get_controller_health_info", 00:05:18.772 "bdev_nvme_disable_controller", 00:05:18.772 "bdev_nvme_enable_controller", 00:05:18.772 "bdev_nvme_reset_controller", 00:05:18.772 "bdev_nvme_get_transport_statistics", 00:05:18.772 "bdev_nvme_apply_firmware", 00:05:18.772 "bdev_nvme_detach_controller", 00:05:18.772 "bdev_nvme_get_controllers", 00:05:18.772 "bdev_nvme_attach_controller", 00:05:18.772 "bdev_nvme_set_hotplug", 00:05:18.772 "bdev_nvme_set_options", 00:05:18.772 "bdev_passthru_delete", 00:05:18.772 "bdev_passthru_create", 00:05:18.772 "bdev_lvol_set_parent_bdev", 00:05:18.772 "bdev_lvol_set_parent", 00:05:18.772 "bdev_lvol_check_shallow_copy", 00:05:18.772 "bdev_lvol_start_shallow_copy", 00:05:18.772 "bdev_lvol_grow_lvstore", 00:05:18.772 "bdev_lvol_get_lvols", 00:05:18.772 "bdev_lvol_get_lvstores", 00:05:18.772 "bdev_lvol_delete", 00:05:18.772 "bdev_lvol_set_read_only", 
00:05:18.772 "bdev_lvol_resize", 00:05:18.772 "bdev_lvol_decouple_parent", 00:05:18.772 "bdev_lvol_inflate", 00:05:18.772 "bdev_lvol_rename", 00:05:18.772 "bdev_lvol_clone_bdev", 00:05:18.772 "bdev_lvol_clone", 00:05:18.772 "bdev_lvol_snapshot", 00:05:18.772 "bdev_lvol_create", 00:05:18.772 "bdev_lvol_delete_lvstore", 00:05:18.772 "bdev_lvol_rename_lvstore", 00:05:18.772 "bdev_lvol_create_lvstore", 00:05:18.772 "bdev_raid_set_options", 00:05:18.772 "bdev_raid_remove_base_bdev", 00:05:18.772 "bdev_raid_add_base_bdev", 00:05:18.772 "bdev_raid_delete", 00:05:18.772 "bdev_raid_create", 00:05:18.772 "bdev_raid_get_bdevs", 00:05:18.772 "bdev_error_inject_error", 00:05:18.772 "bdev_error_delete", 00:05:18.772 "bdev_error_create", 00:05:18.772 "bdev_split_delete", 00:05:18.772 "bdev_split_create", 00:05:18.772 "bdev_delay_delete", 00:05:18.772 "bdev_delay_create", 00:05:18.772 "bdev_delay_update_latency", 00:05:18.772 "bdev_zone_block_delete", 00:05:18.772 "bdev_zone_block_create", 00:05:18.772 "blobfs_create", 00:05:18.772 "blobfs_detect", 00:05:18.772 "blobfs_set_cache_size", 00:05:18.772 "bdev_aio_delete", 00:05:18.772 "bdev_aio_rescan", 00:05:18.772 "bdev_aio_create", 00:05:18.772 "bdev_ftl_set_property", 00:05:18.772 "bdev_ftl_get_properties", 00:05:18.773 "bdev_ftl_get_stats", 00:05:18.773 "bdev_ftl_unmap", 00:05:18.773 "bdev_ftl_unload", 00:05:18.773 "bdev_ftl_delete", 00:05:18.773 "bdev_ftl_load", 00:05:18.773 "bdev_ftl_create", 00:05:18.773 "bdev_virtio_attach_controller", 00:05:18.773 "bdev_virtio_scsi_get_devices", 00:05:18.773 "bdev_virtio_detach_controller", 00:05:18.773 "bdev_virtio_blk_set_hotplug", 00:05:18.773 "bdev_iscsi_delete", 00:05:18.773 "bdev_iscsi_create", 00:05:18.773 "bdev_iscsi_set_options", 00:05:18.773 "accel_error_inject_error", 00:05:18.773 "ioat_scan_accel_module", 00:05:18.773 "dsa_scan_accel_module", 00:05:18.773 "iaa_scan_accel_module", 00:05:18.773 "keyring_file_remove_key", 00:05:18.773 "keyring_file_add_key", 00:05:18.773 
"keyring_linux_set_options", 00:05:18.773 "fsdev_aio_delete", 00:05:18.773 "fsdev_aio_create", 00:05:18.773 "iscsi_get_histogram", 00:05:18.773 "iscsi_enable_histogram", 00:05:18.773 "iscsi_set_options", 00:05:18.773 "iscsi_get_auth_groups", 00:05:18.773 "iscsi_auth_group_remove_secret", 00:05:18.773 "iscsi_auth_group_add_secret", 00:05:18.773 "iscsi_delete_auth_group", 00:05:18.773 "iscsi_create_auth_group", 00:05:18.773 "iscsi_set_discovery_auth", 00:05:18.773 "iscsi_get_options", 00:05:18.773 "iscsi_target_node_request_logout", 00:05:18.773 "iscsi_target_node_set_redirect", 00:05:18.773 "iscsi_target_node_set_auth", 00:05:18.773 "iscsi_target_node_add_lun", 00:05:18.773 "iscsi_get_stats", 00:05:18.773 "iscsi_get_connections", 00:05:18.773 "iscsi_portal_group_set_auth", 00:05:18.773 "iscsi_start_portal_group", 00:05:18.773 "iscsi_delete_portal_group", 00:05:18.773 "iscsi_create_portal_group", 00:05:18.773 "iscsi_get_portal_groups", 00:05:18.773 "iscsi_delete_target_node", 00:05:18.773 "iscsi_target_node_remove_pg_ig_maps", 00:05:18.773 "iscsi_target_node_add_pg_ig_maps", 00:05:18.773 "iscsi_create_target_node", 00:05:18.773 "iscsi_get_target_nodes", 00:05:18.773 "iscsi_delete_initiator_group", 00:05:18.773 "iscsi_initiator_group_remove_initiators", 00:05:18.773 "iscsi_initiator_group_add_initiators", 00:05:18.773 "iscsi_create_initiator_group", 00:05:18.773 "iscsi_get_initiator_groups", 00:05:18.773 "nvmf_set_crdt", 00:05:18.773 "nvmf_set_config", 00:05:18.773 "nvmf_set_max_subsystems", 00:05:18.773 "nvmf_stop_mdns_prr", 00:05:18.773 "nvmf_publish_mdns_prr", 00:05:18.773 "nvmf_subsystem_get_listeners", 00:05:18.773 "nvmf_subsystem_get_qpairs", 00:05:18.773 "nvmf_subsystem_get_controllers", 00:05:18.773 "nvmf_get_stats", 00:05:18.773 "nvmf_get_transports", 00:05:18.773 "nvmf_create_transport", 00:05:18.773 "nvmf_get_targets", 00:05:18.773 "nvmf_delete_target", 00:05:18.773 "nvmf_create_target", 00:05:18.773 "nvmf_subsystem_allow_any_host", 00:05:18.773 
"nvmf_subsystem_set_keys", 00:05:18.773 "nvmf_subsystem_remove_host", 00:05:18.773 "nvmf_subsystem_add_host", 00:05:18.773 "nvmf_ns_remove_host", 00:05:18.773 "nvmf_ns_add_host", 00:05:18.773 "nvmf_subsystem_remove_ns", 00:05:18.773 "nvmf_subsystem_set_ns_ana_group", 00:05:18.773 "nvmf_subsystem_add_ns", 00:05:18.773 "nvmf_subsystem_listener_set_ana_state", 00:05:18.773 "nvmf_discovery_get_referrals", 00:05:18.773 "nvmf_discovery_remove_referral", 00:05:18.773 "nvmf_discovery_add_referral", 00:05:18.773 "nvmf_subsystem_remove_listener", 00:05:18.773 "nvmf_subsystem_add_listener", 00:05:18.773 "nvmf_delete_subsystem", 00:05:18.773 "nvmf_create_subsystem", 00:05:18.773 "nvmf_get_subsystems", 00:05:18.773 "env_dpdk_get_mem_stats", 00:05:18.773 "nbd_get_disks", 00:05:18.773 "nbd_stop_disk", 00:05:18.773 "nbd_start_disk", 00:05:18.773 "ublk_recover_disk", 00:05:18.773 "ublk_get_disks", 00:05:18.773 "ublk_stop_disk", 00:05:18.773 "ublk_start_disk", 00:05:18.773 "ublk_destroy_target", 00:05:18.773 "ublk_create_target", 00:05:18.773 "virtio_blk_create_transport", 00:05:18.773 "virtio_blk_get_transports", 00:05:18.773 "vhost_controller_set_coalescing", 00:05:18.773 "vhost_get_controllers", 00:05:18.773 "vhost_delete_controller", 00:05:18.773 "vhost_create_blk_controller", 00:05:18.773 "vhost_scsi_controller_remove_target", 00:05:18.773 "vhost_scsi_controller_add_target", 00:05:18.773 "vhost_start_scsi_controller", 00:05:18.773 "vhost_create_scsi_controller", 00:05:18.773 "thread_set_cpumask", 00:05:18.773 "scheduler_set_options", 00:05:18.773 "framework_get_governor", 00:05:18.773 "framework_get_scheduler", 00:05:18.773 "framework_set_scheduler", 00:05:18.773 "framework_get_reactors", 00:05:18.773 "thread_get_io_channels", 00:05:18.773 "thread_get_pollers", 00:05:18.773 "thread_get_stats", 00:05:18.773 "framework_monitor_context_switch", 00:05:18.773 "spdk_kill_instance", 00:05:18.773 "log_enable_timestamps", 00:05:18.773 "log_get_flags", 00:05:18.773 "log_clear_flag", 
00:05:18.773 "log_set_flag", 00:05:18.773 "log_get_level", 00:05:18.773 "log_set_level", 00:05:18.773 "log_get_print_level", 00:05:18.773 "log_set_print_level", 00:05:18.773 "framework_enable_cpumask_locks", 00:05:18.773 "framework_disable_cpumask_locks", 00:05:18.773 "framework_wait_init", 00:05:18.773 "framework_start_init", 00:05:18.773 "scsi_get_devices", 00:05:18.773 "bdev_get_histogram", 00:05:18.773 "bdev_enable_histogram", 00:05:18.773 "bdev_set_qos_limit", 00:05:18.773 "bdev_set_qd_sampling_period", 00:05:18.773 "bdev_get_bdevs", 00:05:18.773 "bdev_reset_iostat", 00:05:18.773 "bdev_get_iostat", 00:05:18.773 "bdev_examine", 00:05:18.773 "bdev_wait_for_examine", 00:05:18.773 "bdev_set_options", 00:05:18.773 "accel_get_stats", 00:05:18.773 "accel_set_options", 00:05:18.773 "accel_set_driver", 00:05:18.773 "accel_crypto_key_destroy", 00:05:18.773 "accel_crypto_keys_get", 00:05:18.773 "accel_crypto_key_create", 00:05:18.773 "accel_assign_opc", 00:05:18.773 "accel_get_module_info", 00:05:18.773 "accel_get_opc_assignments", 00:05:18.773 "vmd_rescan", 00:05:18.773 "vmd_remove_device", 00:05:18.773 "vmd_enable", 00:05:18.773 "sock_get_default_impl", 00:05:18.773 "sock_set_default_impl", 00:05:18.773 "sock_impl_set_options", 00:05:18.773 "sock_impl_get_options", 00:05:18.773 "iobuf_get_stats", 00:05:18.773 "iobuf_set_options", 00:05:18.773 "keyring_get_keys", 00:05:18.773 "framework_get_pci_devices", 00:05:18.773 "framework_get_config", 00:05:18.773 "framework_get_subsystems", 00:05:18.773 "fsdev_set_opts", 00:05:18.773 "fsdev_get_opts", 00:05:18.773 "trace_get_info", 00:05:18.773 "trace_get_tpoint_group_mask", 00:05:18.773 "trace_disable_tpoint_group", 00:05:18.773 "trace_enable_tpoint_group", 00:05:18.773 "trace_clear_tpoint_mask", 00:05:18.773 "trace_set_tpoint_mask", 00:05:18.773 "notify_get_notifications", 00:05:18.773 "notify_get_types", 00:05:18.773 "spdk_get_version", 00:05:18.773 "rpc_get_methods" 00:05:18.773 ] 00:05:18.773 12:31:18 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:18.773 12:31:18 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:18.773 12:31:18 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59723 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59723 ']' 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59723 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59723 00:05:18.773 killing process with pid 59723 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59723' 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59723 00:05:18.773 12:31:18 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59723 00:05:22.058 ************************************ 00:05:22.058 END TEST spdkcli_tcp 00:05:22.058 ************************************ 00:05:22.058 00:05:22.058 real 0m4.290s 00:05:22.058 user 0m7.682s 00:05:22.058 sys 0m0.602s 00:05:22.058 12:31:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.058 12:31:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:22.058 12:31:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.058 12:31:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.058 12:31:21 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.058 12:31:21 -- common/autotest_common.sh@10 -- # set +x 00:05:22.058 ************************************ 00:05:22.058 START TEST dpdk_mem_utility 00:05:22.058 ************************************ 00:05:22.058 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:22.058 * Looking for test storage... 00:05:22.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:22.058 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:22.058 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:22.058 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:22.058 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.058 12:31:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:22.059 
12:31:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.059 12:31:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:22.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.059 --rc genhtml_branch_coverage=1 00:05:22.059 --rc genhtml_function_coverage=1 00:05:22.059 --rc genhtml_legend=1 00:05:22.059 --rc geninfo_all_blocks=1 00:05:22.059 --rc geninfo_unexecuted_blocks=1 00:05:22.059 00:05:22.059 ' 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:22.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.059 --rc 
genhtml_branch_coverage=1 00:05:22.059 --rc genhtml_function_coverage=1 00:05:22.059 --rc genhtml_legend=1 00:05:22.059 --rc geninfo_all_blocks=1 00:05:22.059 --rc geninfo_unexecuted_blocks=1 00:05:22.059 00:05:22.059 ' 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:22.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.059 --rc genhtml_branch_coverage=1 00:05:22.059 --rc genhtml_function_coverage=1 00:05:22.059 --rc genhtml_legend=1 00:05:22.059 --rc geninfo_all_blocks=1 00:05:22.059 --rc geninfo_unexecuted_blocks=1 00:05:22.059 00:05:22.059 ' 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:22.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.059 --rc genhtml_branch_coverage=1 00:05:22.059 --rc genhtml_function_coverage=1 00:05:22.059 --rc genhtml_legend=1 00:05:22.059 --rc geninfo_all_blocks=1 00:05:22.059 --rc geninfo_unexecuted_blocks=1 00:05:22.059 00:05:22.059 ' 00:05:22.059 12:31:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:22.059 12:31:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59845 00:05:22.059 12:31:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.059 12:31:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59845 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59845 ']' 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:22.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:22.059 12:31:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.059 [2024-12-14 12:31:21.422200] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:22.059 [2024-12-14 12:31:21.422413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59845 ] 00:05:22.059 [2024-12-14 12:31:21.576031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.059 [2024-12-14 12:31:21.720409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.997 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.997 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:22.997 12:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:22.997 12:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:22.997 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.997 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:22.997 { 00:05:22.997 "filename": "/tmp/spdk_mem_dump.txt" 00:05:22.997 } 00:05:22.997 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.997 12:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:23.259 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:23.259 1 heaps totaling size 824.000000 MiB 00:05:23.259 size: 
824.000000 MiB heap id: 0 00:05:23.259 end heaps---------- 00:05:23.259 9 mempools totaling size 603.782043 MiB 00:05:23.259 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:23.259 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:23.259 size: 100.555481 MiB name: bdev_io_59845 00:05:23.259 size: 50.003479 MiB name: msgpool_59845 00:05:23.259 size: 36.509338 MiB name: fsdev_io_59845 00:05:23.259 size: 21.763794 MiB name: PDU_Pool 00:05:23.259 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:23.259 size: 4.133484 MiB name: evtpool_59845 00:05:23.259 size: 0.026123 MiB name: Session_Pool 00:05:23.259 end mempools------- 00:05:23.259 6 memzones totaling size 4.142822 MiB 00:05:23.259 size: 1.000366 MiB name: RG_ring_0_59845 00:05:23.259 size: 1.000366 MiB name: RG_ring_1_59845 00:05:23.259 size: 1.000366 MiB name: RG_ring_4_59845 00:05:23.259 size: 1.000366 MiB name: RG_ring_5_59845 00:05:23.259 size: 0.125366 MiB name: RG_ring_2_59845 00:05:23.259 size: 0.015991 MiB name: RG_ring_3_59845 00:05:23.259 end memzones------- 00:05:23.259 12:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:23.259 heap id: 0 total size: 824.000000 MiB number of busy elements: 313 number of free elements: 18 00:05:23.259 list of free elements. 
size: 16.781860 MiB 00:05:23.259 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:23.259 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:23.259 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:23.259 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:23.259 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:23.259 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:23.259 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:23.259 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:23.259 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:23.259 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:23.259 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:23.259 element at address: 0x20001b400000 with size: 0.563171 MiB 00:05:23.259 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:23.259 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:23.259 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:23.259 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:23.259 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:23.259 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:23.259 list of standard malloc elements. 
size: 199.287231 MiB 00:05:23.259 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:23.259 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:23.259 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:23.259 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:23.259 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:23.259 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:23.259 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:23.259 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:23.259 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:23.259 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:23.259 element at address: 0x200012bff040 with size: 0.000305 MiB [00:05:23.259-00:05:23.261: several hundred repeated "element at address: ... with size: 0.000244 MiB" entries elided] 00:05:23.261 list of memzone associated elements. size: 607.930908 MiB 00:05:23.261 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:23.261 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:23.261 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:23.261 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:23.261 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:23.261 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59845_0 00:05:23.261 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:23.261 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59845_0 00:05:23.261 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:23.261 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59845_0 00:05:23.261 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:23.261 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:23.261 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:23.261 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:23.261 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:23.261 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59845_0 00:05:23.261 element at address: 0x2000009ffdc0
with size: 2.000549 MiB 00:05:23.261 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59845 00:05:23.261 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:23.261 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59845 00:05:23.261 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:23.261 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:23.261 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:23.261 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:23.261 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:23.261 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:23.261 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:23.261 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:23.261 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:23.261 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59845 00:05:23.261 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:23.261 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59845 00:05:23.261 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:23.261 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59845 00:05:23.261 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:23.261 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59845 00:05:23.261 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:23.261 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59845 00:05:23.261 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:23.261 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59845 00:05:23.261 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:23.261 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:23.261 element at address: 0x200012c6f980 with 
size: 0.500549 MiB 00:05:23.261 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:23.262 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:23.262 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:23.262 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:23.262 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59845 00:05:23.262 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:23.262 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59845 00:05:23.262 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:23.262 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:23.262 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:23.262 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:23.262 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:23.262 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59845 00:05:23.262 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:23.262 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:23.262 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:23.262 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59845 00:05:23.262 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:23.262 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59845 00:05:23.262 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:23.262 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59845 00:05:23.262 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:23.262 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:23.262 12:31:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:23.262 12:31:22 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59845 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59845 ']' 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59845 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59845 00:05:23.262 killing process with pid 59845 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59845' 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59845 00:05:23.262 12:31:22 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59845 00:05:25.801 ************************************ 00:05:25.801 END TEST dpdk_mem_utility 00:05:25.801 ************************************ 00:05:25.801 00:05:25.801 real 0m4.119s 00:05:25.801 user 0m3.967s 00:05:25.801 sys 0m0.633s 00:05:25.801 12:31:25 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.801 12:31:25 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:25.801 12:31:25 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.801 12:31:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.801 12:31:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.801 12:31:25 -- common/autotest_common.sh@10 -- # set +x 00:05:25.801 ************************************ 00:05:25.801 START TEST event 00:05:25.801 ************************************ 00:05:25.801 12:31:25 event -- 
common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:25.801 * Looking for test storage... 00:05:25.801 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.801 12:31:25 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:25.801 12:31:25 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:25.801 12:31:25 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:25.801 12:31:25 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:25.801 12:31:25 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.802 12:31:25 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.802 12:31:25 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.802 12:31:25 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.802 12:31:25 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.802 12:31:25 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.802 12:31:25 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.802 12:31:25 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.802 12:31:25 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.802 12:31:25 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.802 12:31:25 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.802 12:31:25 event -- scripts/common.sh@344 -- # case "$op" in 00:05:25.802 12:31:25 event -- scripts/common.sh@345 -- # : 1 00:05:25.802 12:31:25 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.802 12:31:25 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:25.802 12:31:25 event -- scripts/common.sh@365 -- # decimal 1 00:05:25.802 12:31:25 event -- scripts/common.sh@353 -- # local d=1 00:05:25.802 12:31:25 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.802 12:31:25 event -- scripts/common.sh@355 -- # echo 1 00:05:25.802 12:31:25 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.802 12:31:25 event -- scripts/common.sh@366 -- # decimal 2 00:05:25.802 12:31:25 event -- scripts/common.sh@353 -- # local d=2 00:05:25.802 12:31:25 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.802 12:31:25 event -- scripts/common.sh@355 -- # echo 2 00:05:25.802 12:31:25 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.802 12:31:25 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.802 12:31:25 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.802 12:31:25 event -- scripts/common.sh@368 -- # return 0 00:05:25.802 12:31:25 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.802 12:31:25 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:25.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.802 --rc genhtml_branch_coverage=1 00:05:25.802 --rc genhtml_function_coverage=1 00:05:25.802 --rc genhtml_legend=1 00:05:25.802 --rc geninfo_all_blocks=1 00:05:25.802 --rc geninfo_unexecuted_blocks=1 00:05:25.802 00:05:25.802 ' 00:05:25.802 12:31:25 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:25.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.802 --rc genhtml_branch_coverage=1 00:05:25.802 --rc genhtml_function_coverage=1 00:05:25.802 --rc genhtml_legend=1 00:05:25.802 --rc geninfo_all_blocks=1 00:05:25.802 --rc geninfo_unexecuted_blocks=1 00:05:25.802 00:05:25.802 ' 00:05:25.802 12:31:25 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:25.802 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:25.802 --rc genhtml_branch_coverage=1 00:05:25.802 --rc genhtml_function_coverage=1 00:05:25.802 --rc genhtml_legend=1 00:05:25.802 --rc geninfo_all_blocks=1 00:05:25.802 --rc geninfo_unexecuted_blocks=1 00:05:25.802 00:05:25.802 ' 00:05:25.802 12:31:25 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:25.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.802 --rc genhtml_branch_coverage=1 00:05:25.802 --rc genhtml_function_coverage=1 00:05:25.802 --rc genhtml_legend=1 00:05:25.802 --rc geninfo_all_blocks=1 00:05:25.802 --rc geninfo_unexecuted_blocks=1 00:05:25.802 00:05:25.802 ' 00:05:25.802 12:31:25 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:25.802 12:31:25 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:25.802 12:31:25 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:25.802 12:31:25 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:25.802 12:31:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.802 12:31:25 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.062 ************************************ 00:05:26.062 START TEST event_perf 00:05:26.062 ************************************ 00:05:26.062 12:31:25 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:26.062 Running I/O for 1 seconds...[2024-12-14 12:31:25.588863] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
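The cmp_versions trace above (from scripts/common.sh) splits each version string on '.', '-' and ':' and compares the fields numerically, left to right. A rough Python rendering of that logic; treating non-numeric fields as 0 is a simplifying assumption here, not what the shell helper does verbatim:

```python
import re

def cmp_versions(ver1: str, op: str, ver2: str) -> bool:
    """Compare dotted version strings field by field, roughly as the
    traced cmp_versions shell helper does (split on '.', '-', ':')."""
    def split(v: str) -> list[int]:
        # Non-numeric components are simplified to 0 in this sketch.
        return [int(x) if x.isdigit() else 0 for x in re.split(r"[.:-]", v)]

    v1, v2 = split(ver1), split(ver2)
    # Pad to equal length, mirroring the loop over max(ver1_l, ver2_l).
    n = max(len(v1), len(v2))
    v1 += [0] * (n - len(v1))
    v2 += [0] * (n - len(v2))
    for a, b in zip(v1, v2):
        if a > b:
            return op in (">", ">=")
        if a < b:
            return op in ("<", "<=")
    return op in ("<=", ">=", "=")

# The trace's "lt 1.15 2" check: is the installed lcov older than 2?
print(cmp_versions("1.15", "<", "2"))  # True
```

Here 1.15 < 2 holds because the first fields already differ (1 < 2); the lcov trace uses exactly that result to pick coverage options.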
00:05:26.062 [2024-12-14 12:31:25.588981] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59953 ] 00:05:26.062 [2024-12-14 12:31:25.763482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:26.321 Running I/O for 1 seconds...[2024-12-14 12:31:25.885428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.321 [2024-12-14 12:31:25.885521] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.321 [2024-12-14 12:31:25.885648] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.321 [2024-12-14 12:31:25.885684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.709 00:05:27.709 lcore 0: 207496 00:05:27.709 lcore 1: 207495 00:05:27.709 lcore 2: 207496 00:05:27.709 lcore 3: 207496 00:05:27.709 done. 
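The lcore lines above report how many events each of the four reactors (core mask 0xF) processed during the 1-second run. A small, hypothetical parser for that output (parse_lcore_counts is an illustrative name, not part of SPDK):

```python
import re

def parse_lcore_counts(log_text: str) -> dict[int, int]:
    """Pull per-core event counters out of event_perf output
    lines of the form 'lcore N: COUNT'."""
    counts: dict[int, int] = {}
    for m in re.finditer(r"lcore (\d+): (\d+)", log_text):
        counts[int(m.group(1))] = int(m.group(2))
    return counts

log = """
lcore 0: 207496
lcore 1: 207495
lcore 2: 207496
lcore 3: 207496
"""
counts = parse_lcore_counts(log)
print(sum(counts.values()))  # total events across all four reactors
```

Summing the counters gives the aggregate event rate for the run; near-identical per-core numbers, as above, indicate the reactors were evenly loaded.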
00:05:27.710 00:05:27.710 real 0m1.587s 00:05:27.710 user 0m4.352s 00:05:27.710 sys 0m0.113s 00:05:27.710 12:31:27 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:27.710 12:31:27 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:27.710 ************************************ 00:05:27.710 END TEST event_perf 00:05:27.710 ************************************ 00:05:27.710 12:31:27 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.710 12:31:27 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:27.710 12:31:27 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:27.710 12:31:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.710 ************************************ 00:05:27.710 START TEST event_reactor 00:05:27.710 ************************************ 00:05:27.710 12:31:27 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:27.710 [2024-12-14 12:31:27.238494] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:27.710 [2024-12-14 12:31:27.238651] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59992 ] 00:05:27.710 [2024-12-14 12:31:27.411722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.986 [2024-12-14 12:31:27.523666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.365 test_start 00:05:29.365 oneshot 00:05:29.365 tick 100 00:05:29.365 tick 100 00:05:29.365 tick 250 00:05:29.365 tick 100 00:05:29.365 tick 100 00:05:29.365 tick 100 00:05:29.365 tick 250 00:05:29.365 tick 500 00:05:29.365 tick 100 00:05:29.365 tick 100 00:05:29.365 tick 250 00:05:29.365 tick 100 00:05:29.365 tick 100 00:05:29.365 test_end 00:05:29.365 00:05:29.365 real 0m1.550s 00:05:29.365 user 0m1.347s 00:05:29.365 sys 0m0.095s 00:05:29.365 12:31:28 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.365 12:31:28 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:29.365 ************************************ 00:05:29.365 END TEST event_reactor 00:05:29.365 ************************************ 00:05:29.365 12:31:28 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.365 12:31:28 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:29.365 12:31:28 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.365 12:31:28 event -- common/autotest_common.sh@10 -- # set +x 00:05:29.365 ************************************ 00:05:29.365 START TEST event_reactor_perf 00:05:29.365 ************************************ 00:05:29.365 12:31:28 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:29.365 [2024-12-14 
12:31:28.854473] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:29.365 [2024-12-14 12:31:28.854622] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60029 ] 00:05:29.365 [2024-12-14 12:31:29.028510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.624 [2024-12-14 12:31:29.140839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.005 test_start 00:05:31.005 test_end 00:05:31.005 Performance: 396680 events per second 00:05:31.005 00:05:31.005 real 0m1.556s 00:05:31.005 user 0m1.354s 00:05:31.005 sys 0m0.095s 00:05:31.005 12:31:30 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.005 12:31:30 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:31.005 ************************************ 00:05:31.005 END TEST event_reactor_perf 00:05:31.005 ************************************ 00:05:31.005 12:31:30 event -- event/event.sh@49 -- # uname -s 00:05:31.005 12:31:30 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:31.005 12:31:30 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.005 12:31:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.005 12:31:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.005 12:31:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.005 ************************************ 00:05:31.005 START TEST event_scheduler 00:05:31.005 ************************************ 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:31.005 * Looking for test storage... 
00:05:31.005 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.005 12:31:30 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.005 --rc genhtml_branch_coverage=1 00:05:31.005 --rc genhtml_function_coverage=1 00:05:31.005 --rc genhtml_legend=1 00:05:31.005 --rc geninfo_all_blocks=1 00:05:31.005 --rc geninfo_unexecuted_blocks=1 00:05:31.005 00:05:31.005 ' 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.005 --rc genhtml_branch_coverage=1 00:05:31.005 --rc genhtml_function_coverage=1 00:05:31.005 --rc 
genhtml_legend=1 00:05:31.005 --rc geninfo_all_blocks=1 00:05:31.005 --rc geninfo_unexecuted_blocks=1 00:05:31.005 00:05:31.005 ' 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.005 --rc genhtml_branch_coverage=1 00:05:31.005 --rc genhtml_function_coverage=1 00:05:31.005 --rc genhtml_legend=1 00:05:31.005 --rc geninfo_all_blocks=1 00:05:31.005 --rc geninfo_unexecuted_blocks=1 00:05:31.005 00:05:31.005 ' 00:05:31.005 12:31:30 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.005 --rc genhtml_branch_coverage=1 00:05:31.005 --rc genhtml_function_coverage=1 00:05:31.005 --rc genhtml_legend=1 00:05:31.005 --rc geninfo_all_blocks=1 00:05:31.005 --rc geninfo_unexecuted_blocks=1 00:05:31.005 00:05:31.005 ' 00:05:31.005 12:31:30 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:31.005 12:31:30 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60105 00:05:31.006 12:31:30 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:31.006 12:31:30 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.006 12:31:30 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60105 00:05:31.006 12:31:30 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60105 ']' 00:05:31.006 12:31:30 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.006 12:31:30 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.006 12:31:30 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:31.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.006 12:31:30 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.006 12:31:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:31.265 [2024-12-14 12:31:30.744672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:31.265 [2024-12-14 12:31:30.744821] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60105 ] 00:05:31.265 [2024-12-14 12:31:30.905394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:31.523 [2024-12-14 12:31:31.023232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.523 [2024-12-14 12:31:31.023357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.523 [2024-12-14 12:31:31.023487] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.523 [2024-12-14 12:31:31.023525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.089 12:31:31 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.089 12:31:31 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:32.089 12:31:31 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:32.089 12:31:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.089 12:31:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.089 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.089 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.090 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.090 POWER: Cannot set governor of lcore 0 to performance 00:05:32.090 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.090 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.090 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:32.090 POWER: Cannot set governor of lcore 0 to userspace 00:05:32.090 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:32.090 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:32.090 POWER: Unable to set Power Management Environment for lcore 0 00:05:32.090 [2024-12-14 12:31:31.608137] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:32.090 [2024-12-14 12:31:31.608191] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:32.090 [2024-12-14 12:31:31.608227] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:32.090 [2024-12-14 12:31:31.608273] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:32.090 [2024-12-14 12:31:31.608303] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:32.090 [2024-12-14 12:31:31.608336] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:32.090 12:31:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.090 12:31:31 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:32.090 12:31:31 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.090 12:31:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 [2024-12-14 12:31:31.934525] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:32.349 12:31:31 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:31 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:32.349 12:31:31 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.349 12:31:31 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.349 12:31:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 ************************************ 00:05:32.349 START TEST scheduler_create_thread 00:05:32.349 ************************************ 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 2 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 3 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 4 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 5 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 6 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:31 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.349 7 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 8 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 9 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 10 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:32.349 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.290 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.290 12:31:32 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:33.290 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.290 12:31:32 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.669 12:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:34.669 12:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:34.669 12:31:34 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:34.669 12:31:34 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:34.669 12:31:34 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.608 12:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.608 00:05:35.608 real 0m3.373s 00:05:35.608 user 0m0.025s 00:05:35.608 sys 0m0.007s 00:05:35.608 12:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.608 ************************************ 00:05:35.608 END TEST scheduler_create_thread 00:05:35.608 ************************************ 00:05:35.608 12:31:35 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.867 12:31:35 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:35.867 12:31:35 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60105 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60105 ']' 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60105 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60105 00:05:35.867 killing process with pid 60105 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60105' 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60105 00:05:35.867 12:31:35 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60105 00:05:36.127 [2024-12-14 12:31:35.697818] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:37.509 ************************************ 00:05:37.509 END TEST event_scheduler 00:05:37.509 ************************************ 00:05:37.509 00:05:37.509 real 0m6.451s 00:05:37.509 user 0m13.654s 00:05:37.509 sys 0m0.453s 00:05:37.509 12:31:36 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.509 12:31:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.509 12:31:36 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:37.509 12:31:36 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:37.509 12:31:36 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.509 12:31:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.509 12:31:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:37.509 ************************************ 00:05:37.509 START TEST app_repeat 00:05:37.509 ************************************ 00:05:37.509 12:31:36 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60222 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:37.509 
12:31:36 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60222' 00:05:37.509 Process app_repeat pid: 60222 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:37.509 spdk_app_start Round 0 00:05:37.509 12:31:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60222 /var/tmp/spdk-nbd.sock 00:05:37.509 12:31:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60222 ']' 00:05:37.509 12:31:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.509 12:31:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.509 12:31:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.509 12:31:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.509 12:31:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.509 [2024-12-14 12:31:37.023678] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:37.509 [2024-12-14 12:31:37.023816] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60222 ] 00:05:37.509 [2024-12-14 12:31:37.193298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.769 [2024-12-14 12:31:37.307854] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.769 [2024-12-14 12:31:37.307888] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.338 12:31:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.338 12:31:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:38.338 12:31:37 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.598 Malloc0 00:05:38.598 12:31:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.858 Malloc1 00:05:38.858 12:31:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.858 12:31:38 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.858 12:31:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.119 /dev/nbd0 00:05:39.119 12:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.119 12:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.119 1+0 records in 00:05:39.119 1+0 
records out 00:05:39.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333129 s, 12.3 MB/s 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.119 12:31:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.119 12:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.119 12:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.119 12:31:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.378 /dev/nbd1 00:05:39.378 12:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.378 12:31:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.378 12:31:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:39.378 12:31:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:39.378 12:31:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:39.378 12:31:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:39.378 12:31:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:39.378 12:31:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:39.378 12:31:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:39.378 12:31:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:39.379 12:31:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.379 1+0 records in 00:05:39.379 1+0 records out 00:05:39.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214428 s, 19.1 MB/s 00:05:39.379 12:31:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.379 12:31:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:39.379 12:31:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.379 12:31:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:39.379 12:31:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:39.379 12:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.379 12:31:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.379 12:31:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.379 12:31:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.379 12:31:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.646 { 00:05:39.646 "nbd_device": "/dev/nbd0", 00:05:39.646 "bdev_name": "Malloc0" 00:05:39.646 }, 00:05:39.646 { 00:05:39.646 "nbd_device": "/dev/nbd1", 00:05:39.646 "bdev_name": "Malloc1" 00:05:39.646 } 00:05:39.646 ]' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.646 { 00:05:39.646 "nbd_device": "/dev/nbd0", 00:05:39.646 "bdev_name": "Malloc0" 00:05:39.646 }, 00:05:39.646 { 00:05:39.646 "nbd_device": "/dev/nbd1", 00:05:39.646 "bdev_name": "Malloc1" 00:05:39.646 } 00:05:39.646 ]' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
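The `nbd_get_count` trace above (bdev/nbd_common.sh@61-66) pipes the `nbd_get_disks` RPC reply through `jq` and `grep -c` to count attached devices. A minimal standalone sketch of that pipeline follows; the JSON literal is a stand-in for the live `rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks` reply, and `sed` substitutes for the `jq -r '.[] | .nbd_device'` step so the sketch runs without jq:

```shell
# Stand-in for the nbd_get_disks RPC reply seen in the trace (assumption:
# two Malloc bdevs exported, matching this test round).
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
# Extract one device path per line (the real helper uses jq for this).
nbd_disks_name=$(printf '%s\n' "$nbd_disks_json" |
    sed -n 's/.*"nbd_device": "\([^"]*\)".*/\1/p')
# Count lines naming an nbd device, as bdev/nbd_common.sh@65 does.
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"   # prints 2
```

The test then asserts this count against the expected device-list length (`'[' 2 -ne 2 ']'` in the trace) before proceeding to data verification.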
00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.646 /dev/nbd1' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.646 /dev/nbd1' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.646 256+0 records in 00:05:39.646 256+0 records out 00:05:39.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136084 s, 77.1 MB/s 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.646 256+0 records in 00:05:39.646 256+0 records out 00:05:39.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024593 s, 42.6 MB/s 00:05:39.646 12:31:39 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.646 256+0 records in 00:05:39.646 256+0 records out 00:05:39.646 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0323727 s, 32.4 MB/s 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.646 12:31:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.913 12:31:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.172 12:31:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.172 12:31:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.173 12:31:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.432 12:31:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.432 12:31:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.691 12:31:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.073 [2024-12-14 12:31:41.504994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.073 [2024-12-14 12:31:41.611174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.073 [2024-12-14 12:31:41.611179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.073 
[2024-12-14 12:31:41.805244] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.073 [2024-12-14 12:31:41.805301] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.983 spdk_app_start Round 1 00:05:43.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.983 12:31:43 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.983 12:31:43 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:43.983 12:31:43 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60222 /var/tmp/spdk-nbd.sock 00:05:43.983 12:31:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60222 ']' 00:05:43.983 12:31:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.983 12:31:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.983 12:31:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
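Each `waitfornbd` invocation traced above (common/autotest_common.sh@872-893) polls `/proc/partitions` until the nbd device appears, then reads one 4 KiB block to prove the device is usable. This is a simplified sketch of that helper, not the exact SPDK source: the partitions file and device path are parameterized for illustration, and the real helper additionally passes `iflag=direct` to `dd` and retries the read in a second loop:

```shell
waitfornbd() {
    # Assumed signature for this sketch: name, then optional device path
    # and partitions file (the real helper hardcodes /proc/partitions).
    local nbd_name=$1 dev=${2:-/dev/$nbd_name} parts=${3:-/proc/partitions} i size
    # Poll up to 20 times for the kernel to register the nbd device.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$parts" && break
        sleep 0.1
    done
    ((i <= 20)) || return 1
    # Read one block through the device; a zero-sized result means failure.
    dd if="$dev" of=/tmp/nbdtest bs=4096 count=1 2>/dev/null || return 1
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]
}
```

The `1+0 records in / 1+0 records out` lines in the trace are the `dd` status output from exactly this read probe.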
00:05:43.983 12:31:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.983 12:31:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.983 12:31:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.983 12:31:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.983 12:31:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.243 Malloc0 00:05:44.243 12:31:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.502 Malloc1 00:05:44.502 12:31:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.502 12:31:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.502 12:31:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.502 12:31:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.502 12:31:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.502 12:31:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.502 12:31:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.503 12:31:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.503 12:31:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.503 12:31:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.503 12:31:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.503 12:31:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.503 12:31:44 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.503 12:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.503 12:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.503 12:31:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.763 /dev/nbd0 00:05:44.763 12:31:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.763 12:31:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.763 1+0 records in 00:05:44.763 1+0 records out 00:05:44.763 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000308765 s, 13.3 MB/s 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.763 
12:31:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.763 12:31:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.763 12:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.763 12:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.763 12:31:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.023 /dev/nbd1 00:05:45.023 12:31:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.023 12:31:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.023 1+0 records in 00:05:45.023 1+0 records out 00:05:45.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247977 s, 16.5 MB/s 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.023 12:31:44 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.023 12:31:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.023 12:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.023 12:31:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.023 12:31:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.023 12:31:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.023 12:31:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.283 { 00:05:45.283 "nbd_device": "/dev/nbd0", 00:05:45.283 "bdev_name": "Malloc0" 00:05:45.283 }, 00:05:45.283 { 00:05:45.283 "nbd_device": "/dev/nbd1", 00:05:45.283 "bdev_name": "Malloc1" 00:05:45.283 } 00:05:45.283 ]' 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.283 { 00:05:45.283 "nbd_device": "/dev/nbd0", 00:05:45.283 "bdev_name": "Malloc0" 00:05:45.283 }, 00:05:45.283 { 00:05:45.283 "nbd_device": "/dev/nbd1", 00:05:45.283 "bdev_name": "Malloc1" 00:05:45.283 } 00:05:45.283 ]' 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.283 /dev/nbd1' 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.283 /dev/nbd1' 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.283 
12:31:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.283 12:31:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.283 256+0 records in 00:05:45.283 256+0 records out 00:05:45.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00552436 s, 190 MB/s 00:05:45.284 12:31:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.284 12:31:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.284 256+0 records in 00:05:45.284 256+0 records out 00:05:45.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0377232 s, 27.8 MB/s 00:05:45.284 12:31:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.284 12:31:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.284 256+0 records in 00:05:45.284 256+0 records out 00:05:45.284 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273481 s, 38.3 MB/s 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.284 12:31:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.543 12:31:45 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.543 12:31:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.802 12:31:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.062 12:31:45 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.062 12:31:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.062 12:31:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.630 12:31:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.570 [2024-12-14 12:31:47.272604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.829 [2024-12-14 12:31:47.382164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.829 [2024-12-14 12:31:47.382192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.089 [2024-12-14 12:31:47.571691] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.089 [2024-12-14 12:31:47.571790] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.470 spdk_app_start Round 2 00:05:49.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
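The write/verify pair traced in each round above is `nbd_dd_data_verify` (bdev/nbd_common.sh@70-85): seed 1 MiB of random data, copy it onto every nbd device, then byte-compare each device against the seed file. A hedged sketch follows; the temp-file path is a placeholder, and the real helper writes with `oflag=direct` (omitted here so the sketch runs against plain files):

```shell
nbd_dd_data_verify() {
    # $1 is a space-separated device list, $2 is "write" or "verify".
    local nbd_list=($1) operation=$2 i
    local tmp_file=/tmp/nbdrandtest   # placeholder for the repo test path
    if [ "$operation" = write ]; then
        # Seed 1 MiB (256 x 4 KiB) of random data, then copy to each device.
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 || return 1
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 || return 1
        done
    elif [ "$operation" = verify ]; then
        # Byte-compare the first 1 MiB of each device with the seed file.
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i" || return 1
        done
        rm "$tmp_file"
    fi
}
```

A clean round is exactly what the log shows: one `write` pass, one `verify` pass with silent `cmp` output, then `nbd_stop_disk` for each device and a recount down to zero.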
00:05:49.470 12:31:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:49.470 12:31:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:49.470 12:31:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60222 /var/tmp/spdk-nbd.sock 00:05:49.470 12:31:49 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60222 ']' 00:05:49.470 12:31:49 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.470 12:31:49 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.470 12:31:49 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:49.470 12:31:49 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.470 12:31:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.730 12:31:49 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.730 12:31:49 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.730 12:31:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:49.989 Malloc0 00:05:49.989 12:31:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.248 Malloc1 00:05:50.248 12:31:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.248 12:31:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.507 /dev/nbd0 00:05:50.507 12:31:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.507 12:31:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.507 1+0 records in 00:05:50.507 1+0 records out 00:05:50.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424996 s, 9.6 MB/s 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.507 12:31:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.507 12:31:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.507 12:31:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.507 12:31:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:50.767 /dev/nbd1 00:05:50.767 12:31:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:50.767 12:31:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:50.767 12:31:50 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.767 1+0 records in 00:05:50.767 1+0 records out 00:05:50.767 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353972 s, 11.6 MB/s 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.767 12:31:50 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.767 12:31:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.767 12:31:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.767 12:31:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:50.767 12:31:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.767 12:31:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.026 12:31:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.027 { 00:05:51.027 "nbd_device": "/dev/nbd0", 00:05:51.027 "bdev_name": "Malloc0" 00:05:51.027 }, 00:05:51.027 { 00:05:51.027 "nbd_device": "/dev/nbd1", 00:05:51.027 "bdev_name": "Malloc1" 00:05:51.027 } 00:05:51.027 ]' 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.027 { 
00:05:51.027 "nbd_device": "/dev/nbd0", 00:05:51.027 "bdev_name": "Malloc0" 00:05:51.027 }, 00:05:51.027 { 00:05:51.027 "nbd_device": "/dev/nbd1", 00:05:51.027 "bdev_name": "Malloc1" 00:05:51.027 } 00:05:51.027 ]' 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.027 /dev/nbd1' 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.027 /dev/nbd1' 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.027 256+0 records in 00:05:51.027 256+0 records out 00:05:51.027 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122107 s, 85.9 MB/s 00:05:51.027 12:31:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.027 12:31:50 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.287 256+0 records in 00:05:51.287 256+0 records out 00:05:51.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222419 s, 47.1 MB/s 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.287 256+0 records in 00:05:51.287 256+0 records out 00:05:51.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0244271 s, 42.9 MB/s 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.287 12:31:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.547 12:31:51 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.547 12:31:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:51.806 12:31:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:51.806 12:31:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.375 12:31:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.756 
[2024-12-14 12:31:53.058201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.756 [2024-12-14 12:31:53.160382] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.756 [2024-12-14 12:31:53.160384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.756 [2024-12-14 12:31:53.346441] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.756 [2024-12-14 12:31:53.346520] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.663 12:31:54 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60222 /var/tmp/spdk-nbd.sock 00:05:55.663 12:31:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60222 ']' 00:05:55.663 12:31:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.663 12:31:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.663 12:31:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:55.663 12:31:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.663 12:31:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.663 12:31:55 event.app_repeat -- event/event.sh@39 -- # killprocess 60222 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60222 ']' 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60222 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60222 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.663 12:31:55 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60222' 00:05:55.664 killing process with pid 60222 00:05:55.664 12:31:55 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60222 00:05:55.664 12:31:55 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60222 00:05:56.602 spdk_app_start is called in Round 0. 00:05:56.602 Shutdown signal received, stop current app iteration 00:05:56.602 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:56.602 spdk_app_start is called in Round 1. 00:05:56.602 Shutdown signal received, stop current app iteration 00:05:56.602 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:56.602 spdk_app_start is called in Round 2. 
00:05:56.602 Shutdown signal received, stop current app iteration 00:05:56.602 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:56.602 spdk_app_start is called in Round 3. 00:05:56.602 Shutdown signal received, stop current app iteration 00:05:56.602 12:31:56 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.602 12:31:56 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:56.602 00:05:56.602 real 0m19.247s 00:05:56.602 user 0m41.266s 00:05:56.602 sys 0m2.788s 00:05:56.602 12:31:56 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:56.602 12:31:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.602 ************************************ 00:05:56.602 END TEST app_repeat 00:05:56.602 ************************************ 00:05:56.602 12:31:56 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.602 12:31:56 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.602 12:31:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.602 12:31:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.602 12:31:56 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.602 ************************************ 00:05:56.602 START TEST cpu_locks 00:05:56.602 ************************************ 00:05:56.602 12:31:56 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.862 * Looking for test storage... 
00:05:56.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.862 12:31:56 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.862 --rc genhtml_branch_coverage=1 00:05:56.862 --rc genhtml_function_coverage=1 00:05:56.862 --rc genhtml_legend=1 00:05:56.862 --rc geninfo_all_blocks=1 00:05:56.862 --rc geninfo_unexecuted_blocks=1 00:05:56.862 00:05:56.862 ' 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.862 --rc genhtml_branch_coverage=1 00:05:56.862 --rc genhtml_function_coverage=1 00:05:56.862 --rc genhtml_legend=1 00:05:56.862 --rc geninfo_all_blocks=1 00:05:56.862 --rc geninfo_unexecuted_blocks=1 
00:05:56.862 00:05:56.862 ' 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.862 --rc genhtml_branch_coverage=1 00:05:56.862 --rc genhtml_function_coverage=1 00:05:56.862 --rc genhtml_legend=1 00:05:56.862 --rc geninfo_all_blocks=1 00:05:56.862 --rc geninfo_unexecuted_blocks=1 00:05:56.862 00:05:56.862 ' 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.862 --rc genhtml_branch_coverage=1 00:05:56.862 --rc genhtml_function_coverage=1 00:05:56.862 --rc genhtml_legend=1 00:05:56.862 --rc geninfo_all_blocks=1 00:05:56.862 --rc geninfo_unexecuted_blocks=1 00:05:56.862 00:05:56.862 ' 00:05:56.862 12:31:56 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.862 12:31:56 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.862 12:31:56 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.862 12:31:56 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.862 12:31:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.862 ************************************ 00:05:56.862 START TEST default_locks 00:05:56.862 ************************************ 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60664 00:05:56.863 
12:31:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60664 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60664 ']' 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.863 12:31:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.122 [2024-12-14 12:31:56.607910] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:57.122 [2024-12-14 12:31:56.608100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60664 ] 00:05:57.122 [2024-12-14 12:31:56.781120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.381 [2024-12-14 12:31:56.892647] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.321 12:31:57 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.321 12:31:57 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:58.321 12:31:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60664 00:05:58.321 12:31:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60664 00:05:58.321 12:31:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60664 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60664 ']' 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60664 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60664 00:05:58.581 killing process with pid 60664 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60664' 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60664 00:05:58.581 12:31:58 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60664 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60664 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60664 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60664 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60664 ']' 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:01.119 ERROR: process (pid: 60664) is no longer running 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.119 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60664) - No such process 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:01.119 00:06:01.119 real 0m4.027s 00:06:01.119 user 0m3.957s 00:06:01.119 sys 0m0.645s 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.119 12:32:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.119 ************************************ 00:06:01.119 END TEST default_locks 00:06:01.119 ************************************ 00:06:01.119 12:32:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:01.119 12:32:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:01.119 12:32:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.119 12:32:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.119 ************************************ 00:06:01.119 START TEST default_locks_via_rpc 00:06:01.119 ************************************ 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60739 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60739 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60739 ']' 00:06:01.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.119 12:32:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.119 [2024-12-14 12:32:00.701321] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:01.120 [2024-12-14 12:32:00.701440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60739 ] 00:06:01.379 [2024-12-14 12:32:00.872118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.379 [2024-12-14 12:32:00.985587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.318 12:32:01 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60739 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60739 00:06:02.318 12:32:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:02.588 12:32:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60739 00:06:02.588 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60739 ']' 00:06:02.588 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60739 00:06:02.588 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:02.588 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.588 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60739 00:06:02.862 killing process with pid 60739 00:06:02.862 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.862 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.862 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60739' 00:06:02.862 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60739 00:06:02.862 12:32:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60739 00:06:05.401 ************************************ 00:06:05.401 END TEST default_locks_via_rpc 00:06:05.401 ************************************ 00:06:05.401 00:06:05.401 real 0m4.097s 00:06:05.401 user 0m4.059s 00:06:05.401 sys 0m0.680s 00:06:05.401 
12:32:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.401 12:32:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:05.401 12:32:04 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:05.401 12:32:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:05.401 12:32:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.401 12:32:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:05.401 ************************************ 00:06:05.401 START TEST non_locking_app_on_locked_coremask 00:06:05.401 ************************************ 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60815 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60815 /var/tmp/spdk.sock 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60815 ']' 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:05.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.401 12:32:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.401 [2024-12-14 12:32:04.859948] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:05.401 [2024-12-14 12:32:04.860086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60815 ] 00:06:05.401 [2024-12-14 12:32:05.034942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.660 [2024-12-14 12:32:05.147847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60831 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60831 /var/tmp/spdk2.sock 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60831 ']' 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.599 12:32:05 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.599 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.600 12:32:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.600 [2024-12-14 12:32:06.074264] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:06.600 [2024-12-14 12:32:06.074463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60831 ] 00:06:06.600 [2024-12-14 12:32:06.244642] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.600 [2024-12-14 12:32:06.244705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.860 [2024-12-14 12:32:06.474082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.399 12:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.399 12:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.399 12:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60815 00:06:09.399 12:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60815 00:06:09.399 12:32:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60815 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60815 ']' 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60815 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60815 00:06:09.399 killing process with pid 60815 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60815' 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60815 00:06:09.399 12:32:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60815 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60831 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60831 ']' 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60831 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60831 00:06:14.680 killing process with pid 60831 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60831' 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60831 00:06:14.680 12:32:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60831 00:06:16.603 ************************************ 00:06:16.603 END TEST non_locking_app_on_locked_coremask 00:06:16.603 ************************************ 00:06:16.603 00:06:16.603 real 0m11.421s 
00:06:16.603 user 0m11.611s 00:06:16.603 sys 0m1.261s 00:06:16.603 12:32:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.603 12:32:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.603 12:32:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:16.603 12:32:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.603 12:32:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.603 12:32:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.603 ************************************ 00:06:16.603 START TEST locking_app_on_unlocked_coremask 00:06:16.603 ************************************ 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60981 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60981 /var/tmp/spdk.sock 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60981 ']' 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.603 12:32:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:16.863 [2024-12-14 12:32:16.349600] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:16.863 [2024-12-14 12:32:16.349800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60981 ] 00:06:16.863 [2024-12-14 12:32:16.525235] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:16.863 [2024-12-14 12:32:16.525392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.123 [2024-12-14 12:32:16.640563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60999 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60999 /var/tmp/spdk2.sock 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60999 ']' 00:06:18.062 12:32:17 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:18.062 12:32:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.062 [2024-12-14 12:32:17.604509] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:18.062 [2024-12-14 12:32:17.604707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60999 ] 00:06:18.062 [2024-12-14 12:32:17.772191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.322 [2024-12-14 12:32:18.005026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.858 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:20.858 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:20.858 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60999 00:06:20.858 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60999 00:06:20.858 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- 
event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.117 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60981 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60981 ']' 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60981 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60981 00:06:21.118 killing process with pid 60981 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60981' 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60981 00:06:21.118 12:32:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60981 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60999 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60999 ']' 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60999 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:26.393 
12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60999 00:06:26.393 killing process with pid 60999 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60999' 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60999 00:06:26.393 12:32:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60999 00:06:28.298 00:06:28.298 real 0m11.719s 00:06:28.298 user 0m11.944s 00:06:28.298 sys 0m1.312s 00:06:28.298 ************************************ 00:06:28.298 END TEST locking_app_on_unlocked_coremask 00:06:28.298 ************************************ 00:06:28.298 12:32:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.298 12:32:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.298 12:32:28 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:28.298 12:32:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.298 12:32:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.298 12:32:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.298 ************************************ 00:06:28.298 START TEST locking_app_on_locked_coremask 00:06:28.298 
************************************ 00:06:28.298 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:28.298 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=61155 00:06:28.298 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 61155 /var/tmp/spdk.sock 00:06:28.298 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.298 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61155 ']' 00:06:28.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.558 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.558 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.558 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.558 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.558 12:32:28 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.558 [2024-12-14 12:32:28.131057] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:28.558 [2024-12-14 12:32:28.131263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61155 ] 00:06:28.816 [2024-12-14 12:32:28.307012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.816 [2024-12-14 12:32:28.420441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61171 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61171 /var/tmp/spdk2.sock 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61171 /var/tmp/spdk2.sock 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61171 /var/tmp/spdk2.sock 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61171 ']' 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.755 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.755 [2024-12-14 12:32:29.343913] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:29.755 [2024-12-14 12:32:29.344122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61171 ] 00:06:30.014 [2024-12-14 12:32:29.511823] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 61155 has claimed it. 00:06:30.014 [2024-12-14 12:32:29.511899] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:30.274 ERROR: process (pid: 61171) is no longer running 00:06:30.274 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61171) - No such process 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 61155 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 61155 00:06:30.274 12:32:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 61155 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 61155 ']' 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 61155 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61155 00:06:30.842 
12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61155' 00:06:30.842 killing process with pid 61155 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 61155 00:06:30.842 12:32:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 61155 00:06:33.379 00:06:33.379 real 0m4.839s 00:06:33.379 user 0m5.029s 00:06:33.379 sys 0m0.757s 00:06:33.379 12:32:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.379 12:32:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.379 ************************************ 00:06:33.379 END TEST locking_app_on_locked_coremask 00:06:33.379 ************************************ 00:06:33.379 12:32:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:33.379 12:32:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.379 12:32:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.379 12:32:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:33.379 ************************************ 00:06:33.379 START TEST locking_overlapped_coremask 00:06:33.379 ************************************ 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61240 00:06:33.379 12:32:32 
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61240 /var/tmp/spdk.sock 00:06:33.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61240 ']' 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.379 12:32:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:33.379 [2024-12-14 12:32:33.034586] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:33.379 [2024-12-14 12:32:33.034696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61240 ] 00:06:33.639 [2024-12-14 12:32:33.210868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.639 [2024-12-14 12:32:33.326946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.639 [2024-12-14 12:32:33.327113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.639 [2024-12-14 12:32:33.327153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61264 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61264 /var/tmp/spdk2.sock 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61264 /var/tmp/spdk2.sock 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:34.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61264 /var/tmp/spdk2.sock 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61264 ']' 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.577 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.577 [2024-12-14 12:32:34.310795] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:34.577 [2024-12-14 12:32:34.310921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61264 ] 00:06:34.836 [2024-12-14 12:32:34.480396] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61240 has claimed it. 00:06:34.836 [2024-12-14 12:32:34.480478] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:35.405 ERROR: process (pid: 61264) is no longer running 00:06:35.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61264) - No such process 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61240 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61240 ']' 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61240 00:06:35.405 12:32:34 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61240 00:06:35.405 killing process with pid 61240 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61240' 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61240 00:06:35.405 12:32:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61240 00:06:37.942 ************************************ 00:06:37.942 END TEST locking_overlapped_coremask 00:06:37.942 ************************************ 00:06:37.942 00:06:37.942 real 0m4.447s 00:06:37.942 user 0m12.086s 00:06:37.942 sys 0m0.587s 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:37.942 12:32:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:37.942 12:32:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:37.942 12:32:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.942 12:32:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.942 ************************************ 00:06:37.942 START TEST 
locking_overlapped_coremask_via_rpc 00:06:37.942 ************************************ 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61328 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61328 /var/tmp/spdk.sock 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61328 ']' 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:37.942 12:32:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.942 [2024-12-14 12:32:37.520067] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:37.942 [2024-12-14 12:32:37.520183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61328 ] 00:06:38.202 [2024-12-14 12:32:37.692418] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.202 [2024-12-14 12:32:37.692553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.202 [2024-12-14 12:32:37.813809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.202 [2024-12-14 12:32:37.813980] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.202 [2024-12-14 12:32:37.814015] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61346 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61346 /var/tmp/spdk2.sock 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61346 ']' 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.140 12:32:38 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.140 12:32:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.140 [2024-12-14 12:32:38.771253] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:39.140 [2024-12-14 12:32:38.771473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61346 ] 00:06:39.400 [2024-12-14 12:32:38.940911] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:39.400 [2024-12-14 12:32:38.940963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:39.660 [2024-12-14 12:32:39.181450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.660 [2024-12-14 12:32:39.181567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.660 [2024-12-14 12:32:39.181602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.201 12:32:41 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.201 [2024-12-14 12:32:41.388224] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61328 has claimed it. 00:06:42.201 request: 00:06:42.201 { 00:06:42.201 "method": "framework_enable_cpumask_locks", 00:06:42.201 "req_id": 1 00:06:42.201 } 00:06:42.201 Got JSON-RPC error response 00:06:42.201 response: 00:06:42.201 { 00:06:42.201 "code": -32603, 00:06:42.201 "message": "Failed to claim CPU core: 2" 00:06:42.201 } 00:06:42.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61328 /var/tmp/spdk.sock 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61328 ']' 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61346 /var/tmp/spdk2.sock 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61346 ']' 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.201 00:06:42.201 real 0m4.381s 00:06:42.201 user 0m1.299s 00:06:42.201 sys 0m0.190s 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.201 12:32:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.201 ************************************ 00:06:42.201 END TEST locking_overlapped_coremask_via_rpc 00:06:42.201 ************************************ 00:06:42.201 12:32:41 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:42.201 12:32:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61328 ]] 00:06:42.201 12:32:41 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 61328 00:06:42.201 12:32:41 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61328 ']' 00:06:42.201 12:32:41 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61328 00:06:42.201 12:32:41 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:42.202 12:32:41 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.202 12:32:41 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61328 00:06:42.202 killing process with pid 61328 00:06:42.202 12:32:41 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.202 12:32:41 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.202 12:32:41 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61328' 00:06:42.202 12:32:41 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61328 00:06:42.202 12:32:41 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61328 00:06:44.738 12:32:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61346 ]] 00:06:44.738 12:32:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61346 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61346 ']' 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61346 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61346 00:06:44.738 killing process with pid 61346 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 61346' 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61346 00:06:44.738 12:32:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61346 00:06:47.275 12:32:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.275 12:32:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:47.275 12:32:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61328 ]] 00:06:47.275 12:32:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61328 00:06:47.275 12:32:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61328 ']' 00:06:47.275 Process with pid 61328 is not found 00:06:47.275 Process with pid 61346 is not found 00:06:47.275 12:32:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61328 00:06:47.275 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61328) - No such process 00:06:47.275 12:32:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61328 is not found' 00:06:47.275 12:32:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61346 ]] 00:06:47.275 12:32:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61346 00:06:47.275 12:32:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61346 ']' 00:06:47.275 12:32:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61346 00:06:47.275 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61346) - No such process 00:06:47.275 12:32:46 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61346 is not found' 00:06:47.275 12:32:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:47.275 00:06:47.275 real 0m50.525s 00:06:47.275 user 1m26.252s 00:06:47.275 sys 0m6.622s 00:06:47.275 ************************************ 00:06:47.275 END TEST cpu_locks 00:06:47.275 ************************************ 00:06:47.275 12:32:46 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:06:47.275 12:32:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.275 ************************************ 00:06:47.275 END TEST event 00:06:47.275 ************************************ 00:06:47.275 00:06:47.275 real 1m21.550s 00:06:47.275 user 2m28.466s 00:06:47.275 sys 0m10.569s 00:06:47.275 12:32:46 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.275 12:32:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.275 12:32:46 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.275 12:32:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.275 12:32:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.275 12:32:46 -- common/autotest_common.sh@10 -- # set +x 00:06:47.275 ************************************ 00:06:47.275 START TEST thread 00:06:47.275 ************************************ 00:06:47.275 12:32:46 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:47.534 * Looking for test storage... 
00:06:47.534 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:47.534 12:32:47 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:47.534 12:32:47 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:47.534 12:32:47 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:47.534 12:32:47 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:47.534 12:32:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.534 12:32:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.534 12:32:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.534 12:32:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.534 12:32:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.534 12:32:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.534 12:32:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.534 12:32:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.535 12:32:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.535 12:32:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.535 12:32:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.535 12:32:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:47.535 12:32:47 thread -- scripts/common.sh@345 -- # : 1 00:06:47.535 12:32:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.535 12:32:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.535 12:32:47 thread -- scripts/common.sh@365 -- # decimal 1 00:06:47.535 12:32:47 thread -- scripts/common.sh@353 -- # local d=1 00:06:47.535 12:32:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.535 12:32:47 thread -- scripts/common.sh@355 -- # echo 1 00:06:47.535 12:32:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.535 12:32:47 thread -- scripts/common.sh@366 -- # decimal 2 00:06:47.535 12:32:47 thread -- scripts/common.sh@353 -- # local d=2 00:06:47.535 12:32:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.535 12:32:47 thread -- scripts/common.sh@355 -- # echo 2 00:06:47.535 12:32:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.535 12:32:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.535 12:32:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.535 12:32:47 thread -- scripts/common.sh@368 -- # return 0 00:06:47.535 12:32:47 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.535 12:32:47 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:47.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.535 --rc genhtml_branch_coverage=1 00:06:47.535 --rc genhtml_function_coverage=1 00:06:47.535 --rc genhtml_legend=1 00:06:47.535 --rc geninfo_all_blocks=1 00:06:47.535 --rc geninfo_unexecuted_blocks=1 00:06:47.535 00:06:47.535 ' 00:06:47.535 12:32:47 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:47.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.535 --rc genhtml_branch_coverage=1 00:06:47.535 --rc genhtml_function_coverage=1 00:06:47.535 --rc genhtml_legend=1 00:06:47.535 --rc geninfo_all_blocks=1 00:06:47.535 --rc geninfo_unexecuted_blocks=1 00:06:47.535 00:06:47.535 ' 00:06:47.535 12:32:47 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:47.535 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.535 --rc genhtml_branch_coverage=1 00:06:47.535 --rc genhtml_function_coverage=1 00:06:47.535 --rc genhtml_legend=1 00:06:47.535 --rc geninfo_all_blocks=1 00:06:47.535 --rc geninfo_unexecuted_blocks=1 00:06:47.535 00:06:47.535 ' 00:06:47.535 12:32:47 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:47.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.535 --rc genhtml_branch_coverage=1 00:06:47.535 --rc genhtml_function_coverage=1 00:06:47.535 --rc genhtml_legend=1 00:06:47.535 --rc geninfo_all_blocks=1 00:06:47.535 --rc geninfo_unexecuted_blocks=1 00:06:47.535 00:06:47.535 ' 00:06:47.535 12:32:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.535 12:32:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:47.535 12:32:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.535 12:32:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.535 ************************************ 00:06:47.535 START TEST thread_poller_perf 00:06:47.535 ************************************ 00:06:47.535 12:32:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:47.535 [2024-12-14 12:32:47.198178] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:47.535 [2024-12-14 12:32:47.198740] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61541 ] 00:06:47.794 [2024-12-14 12:32:47.371385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.794 [2024-12-14 12:32:47.486293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.794 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:49.175 [2024-12-14T12:32:48.913Z] ====================================== 00:06:49.175 [2024-12-14T12:32:48.913Z] busy:2300369024 (cyc) 00:06:49.175 [2024-12-14T12:32:48.913Z] total_run_count: 403000 00:06:49.175 [2024-12-14T12:32:48.913Z] tsc_hz: 2290000000 (cyc) 00:06:49.175 [2024-12-14T12:32:48.913Z] ====================================== 00:06:49.175 [2024-12-14T12:32:48.913Z] poller_cost: 5708 (cyc), 2492 (nsec) 00:06:49.175 00:06:49.175 real 0m1.568s 00:06:49.175 user 0m1.363s 00:06:49.175 sys 0m0.098s 00:06:49.175 12:32:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:49.175 12:32:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:49.175 ************************************ 00:06:49.175 END TEST thread_poller_perf 00:06:49.175 ************************************ 00:06:49.175 12:32:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.175 12:32:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:49.175 12:32:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:49.175 12:32:48 thread -- common/autotest_common.sh@10 -- # set +x 00:06:49.175 ************************************ 00:06:49.175 START TEST thread_poller_perf 00:06:49.175 
************************************ 00:06:49.175 12:32:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:49.175 [2024-12-14 12:32:48.828662] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:49.175 [2024-12-14 12:32:48.828769] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61583 ] 00:06:49.435 [2024-12-14 12:32:48.991186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.435 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:49.435 [2024-12-14 12:32:49.100898] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.815 [2024-12-14T12:32:50.553Z] ====================================== 00:06:50.815 [2024-12-14T12:32:50.553Z] busy:2293697170 (cyc) 00:06:50.815 [2024-12-14T12:32:50.553Z] total_run_count: 4785000 00:06:50.815 [2024-12-14T12:32:50.553Z] tsc_hz: 2290000000 (cyc) 00:06:50.815 [2024-12-14T12:32:50.553Z] ====================================== 00:06:50.815 [2024-12-14T12:32:50.553Z] poller_cost: 479 (cyc), 209 (nsec) 00:06:50.815 00:06:50.815 real 0m1.545s 00:06:50.815 user 0m1.347s 00:06:50.815 sys 0m0.093s 00:06:50.815 12:32:50 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.815 ************************************ 00:06:50.815 END TEST thread_poller_perf 00:06:50.815 ************************************ 00:06:50.815 12:32:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:50.815 12:32:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:50.815 00:06:50.815 real 0m3.468s 00:06:50.815 user 0m2.876s 00:06:50.815 sys 0m0.393s 00:06:50.815 ************************************ 
00:06:50.815 END TEST thread 00:06:50.815 ************************************ 00:06:50.815 12:32:50 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.815 12:32:50 thread -- common/autotest_common.sh@10 -- # set +x 00:06:50.815 12:32:50 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:50.815 12:32:50 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:50.815 12:32:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:50.815 12:32:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.815 12:32:50 -- common/autotest_common.sh@10 -- # set +x 00:06:50.815 ************************************ 00:06:50.815 START TEST app_cmdline 00:06:50.815 ************************************ 00:06:50.815 12:32:50 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:51.075 * Looking for test storage... 00:06:51.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:51.075 12:32:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.075 --rc genhtml_branch_coverage=1 00:06:51.075 --rc genhtml_function_coverage=1 00:06:51.075 --rc 
genhtml_legend=1 00:06:51.075 --rc geninfo_all_blocks=1 00:06:51.075 --rc geninfo_unexecuted_blocks=1 00:06:51.075 00:06:51.075 ' 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.075 --rc genhtml_branch_coverage=1 00:06:51.075 --rc genhtml_function_coverage=1 00:06:51.075 --rc genhtml_legend=1 00:06:51.075 --rc geninfo_all_blocks=1 00:06:51.075 --rc geninfo_unexecuted_blocks=1 00:06:51.075 00:06:51.075 ' 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.075 --rc genhtml_branch_coverage=1 00:06:51.075 --rc genhtml_function_coverage=1 00:06:51.075 --rc genhtml_legend=1 00:06:51.075 --rc geninfo_all_blocks=1 00:06:51.075 --rc geninfo_unexecuted_blocks=1 00:06:51.075 00:06:51.075 ' 00:06:51.075 12:32:50 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:51.075 --rc genhtml_branch_coverage=1 00:06:51.075 --rc genhtml_function_coverage=1 00:06:51.075 --rc genhtml_legend=1 00:06:51.075 --rc geninfo_all_blocks=1 00:06:51.075 --rc geninfo_unexecuted_blocks=1 00:06:51.075 00:06:51.075 ' 00:06:51.075 12:32:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:51.075 12:32:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61671 00:06:51.075 12:32:50 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:51.075 12:32:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61671 00:06:51.076 12:32:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61671 ']' 00:06:51.076 12:32:50 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.076 12:32:50 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:51.076 12:32:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.076 12:32:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.076 12:32:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:51.076 [2024-12-14 12:32:50.760591] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:51.076 [2024-12-14 12:32:50.760805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61671 ] 00:06:51.335 [2024-12-14 12:32:50.935431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.335 [2024-12-14 12:32:51.046376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.273 12:32:51 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.273 12:32:51 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:52.273 12:32:51 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:52.532 { 00:06:52.532 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:52.532 "fields": { 00:06:52.532 "major": 25, 00:06:52.532 "minor": 1, 00:06:52.532 "patch": 0, 00:06:52.532 "suffix": "-pre", 00:06:52.532 "commit": "e01cb43b8" 00:06:52.532 } 00:06:52.532 } 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:52.532 12:32:52 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.532 12:32:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.533 12:32:52 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.533 12:32:52 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:52.533 12:32:52 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:52.533 12:32:52 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:52.533 12:32:52 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:52.792 request: 00:06:52.792 { 00:06:52.792 "method": "env_dpdk_get_mem_stats", 00:06:52.792 "req_id": 1 00:06:52.792 } 00:06:52.792 Got JSON-RPC error response 00:06:52.792 response: 00:06:52.792 { 00:06:52.792 "code": -32601, 00:06:52.792 "message": "Method not found" 00:06:52.792 } 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:52.792 12:32:52 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61671 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61671 ']' 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61671 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61671 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61671' 00:06:52.792 killing process with pid 61671 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@973 -- # kill 61671 00:06:52.792 12:32:52 app_cmdline -- common/autotest_common.sh@978 -- # wait 61671 00:06:55.328 00:06:55.328 real 0m4.278s 00:06:55.328 user 0m4.471s 00:06:55.328 sys 0m0.612s 00:06:55.328 12:32:54 app_cmdline -- 
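The `request:`/`response:` pair above shows why the cmdline test passes: `spdk_tgt` was started with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so any other method (here `env_dpdk_get_mem_stats`) is rejected with the standard JSON-RPC "Method not found" error. A minimal sketch validating the error payload shown in the log (the payload text is copied from the trace; the constant is the JSON-RPC 2.0 standard code):

```python
import json

# Error body as printed in the log above, after the rejected
# env_dpdk_get_mem_stats call against a restricted spdk_tgt.
response = json.loads("""
{
  "code": -32601,
  "message": "Method not found"
}
""")

METHOD_NOT_FOUND = -32601  # standard JSON-RPC 2.0 error code
assert response["code"] == METHOD_NOT_FOUND
print("method correctly rejected:", response["message"])
```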
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.328 ************************************ 00:06:55.328 END TEST app_cmdline 00:06:55.328 ************************************ 00:06:55.328 12:32:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:55.328 12:32:54 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.328 12:32:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.328 12:32:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.328 12:32:54 -- common/autotest_common.sh@10 -- # set +x 00:06:55.328 ************************************ 00:06:55.328 START TEST version 00:06:55.328 ************************************ 00:06:55.328 12:32:54 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:55.328 * Looking for test storage... 00:06:55.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.329 12:32:54 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.329 12:32:54 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.329 12:32:54 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.329 12:32:54 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.329 12:32:54 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.329 12:32:54 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.329 12:32:54 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.329 12:32:54 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.329 12:32:54 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.329 12:32:54 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:06:55.329 12:32:54 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.329 12:32:54 version -- scripts/common.sh@344 -- # case "$op" in 00:06:55.329 12:32:54 version -- scripts/common.sh@345 -- # : 1 00:06:55.329 12:32:54 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.329 12:32:54 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.329 12:32:54 version -- scripts/common.sh@365 -- # decimal 1 00:06:55.329 12:32:54 version -- scripts/common.sh@353 -- # local d=1 00:06:55.329 12:32:54 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.329 12:32:54 version -- scripts/common.sh@355 -- # echo 1 00:06:55.329 12:32:54 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.329 12:32:54 version -- scripts/common.sh@366 -- # decimal 2 00:06:55.329 12:32:54 version -- scripts/common.sh@353 -- # local d=2 00:06:55.329 12:32:54 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.329 12:32:54 version -- scripts/common.sh@355 -- # echo 2 00:06:55.329 12:32:54 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.329 12:32:54 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.329 12:32:54 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.329 12:32:54 version -- scripts/common.sh@368 -- # return 0 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.329 --rc genhtml_branch_coverage=1 00:06:55.329 --rc genhtml_function_coverage=1 00:06:55.329 --rc genhtml_legend=1 00:06:55.329 --rc geninfo_all_blocks=1 00:06:55.329 --rc geninfo_unexecuted_blocks=1 00:06:55.329 00:06:55.329 ' 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:06:55.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.329 --rc genhtml_branch_coverage=1 00:06:55.329 --rc genhtml_function_coverage=1 00:06:55.329 --rc genhtml_legend=1 00:06:55.329 --rc geninfo_all_blocks=1 00:06:55.329 --rc geninfo_unexecuted_blocks=1 00:06:55.329 00:06:55.329 ' 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.329 --rc genhtml_branch_coverage=1 00:06:55.329 --rc genhtml_function_coverage=1 00:06:55.329 --rc genhtml_legend=1 00:06:55.329 --rc geninfo_all_blocks=1 00:06:55.329 --rc geninfo_unexecuted_blocks=1 00:06:55.329 00:06:55.329 ' 00:06:55.329 12:32:54 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.329 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.329 --rc genhtml_branch_coverage=1 00:06:55.329 --rc genhtml_function_coverage=1 00:06:55.329 --rc genhtml_legend=1 00:06:55.329 --rc geninfo_all_blocks=1 00:06:55.329 --rc geninfo_unexecuted_blocks=1 00:06:55.329 00:06:55.329 ' 00:06:55.329 12:32:55 version -- app/version.sh@17 -- # get_header_version major 00:06:55.329 12:32:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.329 12:32:55 version -- app/version.sh@14 -- # cut -f2 00:06:55.329 12:32:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.329 12:32:55 version -- app/version.sh@17 -- # major=25 00:06:55.329 12:32:55 version -- app/version.sh@18 -- # get_header_version minor 00:06:55.329 12:32:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.329 12:32:55 version -- app/version.sh@14 -- # cut -f2 00:06:55.329 12:32:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.329 12:32:55 version -- app/version.sh@18 -- # minor=1 00:06:55.329 12:32:55 
version -- app/version.sh@19 -- # get_header_version patch 00:06:55.329 12:32:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.329 12:32:55 version -- app/version.sh@14 -- # cut -f2 00:06:55.329 12:32:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.329 12:32:55 version -- app/version.sh@19 -- # patch=0 00:06:55.329 12:32:55 version -- app/version.sh@20 -- # get_header_version suffix 00:06:55.329 12:32:55 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:55.329 12:32:55 version -- app/version.sh@14 -- # cut -f2 00:06:55.329 12:32:55 version -- app/version.sh@14 -- # tr -d '"' 00:06:55.329 12:32:55 version -- app/version.sh@20 -- # suffix=-pre 00:06:55.329 12:32:55 version -- app/version.sh@22 -- # version=25.1 00:06:55.329 12:32:55 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:55.329 12:32:55 version -- app/version.sh@28 -- # version=25.1rc0 00:06:55.329 12:32:55 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:55.329 12:32:55 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:55.587 12:32:55 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:55.587 12:32:55 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:55.587 00:06:55.587 real 0m0.319s 00:06:55.587 user 0m0.196s 00:06:55.587 sys 0m0.181s 00:06:55.587 ************************************ 00:06:55.587 END TEST version 00:06:55.587 ************************************ 00:06:55.587 12:32:55 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.587 12:32:55 version -- common/autotest_common.sh@10 -- # set +x 00:06:55.587 
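The `version.sh` trace above assembles `25.1rc0` from the `SPDK_VERSION_*` macros in `include/spdk/version.h`. A sketch of that logic as inferred purely from the xtrace (the mapping of a `-pre` suffix to an `rc0` tag is an assumption read off the trace, not confirmed against the script source):

```python
# Rebuild the version string the way test/app/version.sh appears to
# in the trace above. Logic inferred from the xtrace; the "-pre" ->
# "rc0" mapping is an assumption based on the observed output.
def spdk_version(major: int, minor: int, patch: int, suffix: str) -> str:
    version = f"{major}.{minor}"
    if patch != 0:                 # trace: (( patch != 0 )) skipped for patch=0
        version += f".{patch}"
    if suffix == "-pre":
        version += "rc0"           # trace: version=25.1rc0 for suffix=-pre
    return version

assert spdk_version(25, 1, 0, "-pre") == "25.1rc0"
```

The test then compares this against `python3 -c 'import spdk; print(spdk.__version__)'`, which also reports `25.1rc0`, so the shell and Python version sources agree.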
12:32:55 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:55.587 12:32:55 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:55.587 12:32:55 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:55.587 12:32:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.587 12:32:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.587 12:32:55 -- common/autotest_common.sh@10 -- # set +x 00:06:55.587 ************************************ 00:06:55.587 START TEST bdev_raid 00:06:55.588 ************************************ 00:06:55.588 12:32:55 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:55.588 * Looking for test storage... 00:06:55.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:55.588 12:32:55 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.588 12:32:55 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.588 12:32:55 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.846 12:32:55 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.846 --rc genhtml_branch_coverage=1 00:06:55.846 --rc genhtml_function_coverage=1 00:06:55.846 --rc genhtml_legend=1 00:06:55.846 --rc geninfo_all_blocks=1 00:06:55.846 --rc geninfo_unexecuted_blocks=1 00:06:55.846 00:06:55.846 ' 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.846 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:55.846 --rc genhtml_branch_coverage=1 00:06:55.846 --rc genhtml_function_coverage=1 00:06:55.846 --rc genhtml_legend=1 00:06:55.846 --rc geninfo_all_blocks=1 00:06:55.846 --rc geninfo_unexecuted_blocks=1 00:06:55.846 00:06:55.846 ' 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.846 --rc genhtml_branch_coverage=1 00:06:55.846 --rc genhtml_function_coverage=1 00:06:55.846 --rc genhtml_legend=1 00:06:55.846 --rc geninfo_all_blocks=1 00:06:55.846 --rc geninfo_unexecuted_blocks=1 00:06:55.846 00:06:55.846 ' 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.846 --rc genhtml_branch_coverage=1 00:06:55.846 --rc genhtml_function_coverage=1 00:06:55.846 --rc genhtml_legend=1 00:06:55.846 --rc geninfo_all_blocks=1 00:06:55.846 --rc geninfo_unexecuted_blocks=1 00:06:55.846 00:06:55.846 ' 00:06:55.846 12:32:55 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:55.846 12:32:55 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:55.846 12:32:55 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:55.846 12:32:55 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:55.846 12:32:55 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:55.846 12:32:55 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:55.846 12:32:55 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.846 12:32:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.846 ************************************ 
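The repeated `cmp_versions 1.15 '<' 2` traces above (one per test suite) decide whether the installed `lcov` predates 2.0 and therefore needs the `--rc lcov_*` option spelling. A Python sketch of the field-by-field comparison as it appears in the trace, splitting on `IFS=.-:` and comparing numerically; this mirrors the observed behavior, not the shell source verbatim:

```python
import re

def lt(a: str, b: str) -> bool:
    """True if version a sorts strictly before version b
    (field-by-field numeric compare, as in scripts/common.sh)."""
    va = [int(x) for x in re.split(r"[.:-]", a) if x.isdigit()]
    vb = [int(x) for x in re.split(r"[.:-]", b) if x.isdigit()]
    for i in range(max(len(va), len(vb))):
        x = va[i] if i < len(va) else 0   # missing fields compare as 0
        y = vb[i] if i < len(vb) else 0
        if x != y:
            return x < y
    return False

assert lt("1.15", "2")        # the check the trace performs for lcov
assert not lt("2.1", "2")
```

Because `1 < 2` already decides the comparison in the first field, the trace returns at `scripts/common.sh@368` without ever looking at the `.15`.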
00:06:55.846 START TEST raid1_resize_data_offset_test 00:06:55.846 ************************************ 00:06:55.846 Process raid pid: 61854 00:06:55.846 12:32:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:55.846 12:32:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=61854 00:06:55.846 12:32:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 61854' 00:06:55.846 12:32:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.847 12:32:55 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 61854 00:06:55.847 12:32:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 61854 ']' 00:06:55.847 12:32:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.847 12:32:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.847 12:32:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.847 12:32:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.847 12:32:55 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.847 [2024-12-14 12:32:55.500063] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:55.847 [2024-12-14 12:32:55.500692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:56.105 [2024-12-14 12:32:55.673075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.105 [2024-12-14 12:32:55.784248] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.364 [2024-12-14 12:32:55.981196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.364 [2024-12-14 12:32:55.981293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.622 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.622 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:56.623 12:32:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:56.623 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.623 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.882 malloc0 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.882 malloc1 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.882 12:32:56 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.882 null0 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.882 [2024-12-14 12:32:56.505871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:56.882 [2024-12-14 12:32:56.507614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:56.882 [2024-12-14 12:32:56.507742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:56.882 [2024-12-14 12:32:56.507948] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:56.882 [2024-12-14 12:32:56.508001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:56.882 [2024-12-14 12:32:56.508319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:56.882 [2024-12-14 12:32:56.508552] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:56.882 [2024-12-14 12:32:56.508603] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:56.882 [2024-12-14 12:32:56.508808] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:56.882 [2024-12-14 12:32:56.561769] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.882 12:32:56 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.450 malloc2 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.450 [2024-12-14 12:32:57.094101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:06:57.450 [2024-12-14 12:32:57.111529] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.450 [2024-12-14 12:32:57.113435] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 61854 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 61854 ']' 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 61854 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:06:57.450 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61854 00:06:57.709 killing process with pid 61854 00:06:57.709 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.709 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.709 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61854' 00:06:57.709 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 61854 00:06:57.709 12:32:57 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 61854 00:06:57.709 [2024-12-14 12:32:57.210848] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:57.709 [2024-12-14 12:32:57.211908] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:06:57.709 [2024-12-14 12:32:57.212081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.709 [2024-12-14 12:32:57.212104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:06:57.709 [2024-12-14 12:32:57.247143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:57.709 [2024-12-14 12:32:57.247528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:57.709 [2024-12-14 12:32:57.247596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:59.617 [2024-12-14 12:32:59.073505] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.997 12:33:00 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:00.997 00:07:00.997 real 0m4.906s 00:07:00.997 user 0m4.784s 00:07:00.997 sys 0m0.560s 00:07:00.997 12:33:00 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.997 12:33:00 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.997 ************************************ 00:07:00.997 END TEST raid1_resize_data_offset_test 00:07:00.997 ************************************ 00:07:00.997 12:33:00 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:00.997 12:33:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.997 12:33:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.997 12:33:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.997 ************************************ 00:07:00.997 START TEST raid0_resize_superblock_test 00:07:00.997 ************************************ 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61943 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61943' 00:07:00.997 Process raid pid: 61943 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61943 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61943 ']' 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.997 12:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.998 12:33:00 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.998 [2024-12-14 12:33:00.458536] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:00.998 [2024-12-14 12:33:00.458732] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:00.998 [2024-12-14 12:33:00.613284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.257 [2024-12-14 12:33:00.757505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.517 [2024-12-14 12:33:00.997963] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.517 [2024-12-14 12:33:00.998142] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:01.776 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.776 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:01.776 12:33:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:01.776 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.776 12:33:01 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:02.345 malloc0 00:07:02.345 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.345 12:33:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:02.345 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.345 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.345 [2024-12-14 12:33:01.943658] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:02.345 [2024-12-14 12:33:01.943851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.345 [2024-12-14 12:33:01.943903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:02.345 [2024-12-14 12:33:01.943942] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.345 [2024-12-14 12:33:01.946425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.345 [2024-12-14 12:33:01.946522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:02.345 pt0 00:07:02.345 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.345 12:33:01 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:02.345 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.345 12:33:01 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 1a10d54e-2550-403f-af65-5889d6b0a69b 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd 
bdev_lvol_create -l lvs0 lvol0 64 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 e8ed3fa0-8046-44fc-87a2-e4664ece0cc2 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 58c797be-a4c5-4923-9fc8-1ebb53c7c743 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 [2024-12-14 12:33:02.140886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e8ed3fa0-8046-44fc-87a2-e4664ece0cc2 is claimed 00:07:02.605 [2024-12-14 12:33:02.141003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 58c797be-a4c5-4923-9fc8-1ebb53c7c743 is claimed 00:07:02.605 [2024-12-14 12:33:02.141167] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:02.605 [2024-12-14 12:33:02.141187] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 
00:07:02.605 [2024-12-14 12:33:02.141471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:02.605 [2024-12-14 12:33:02.141705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:02.605 [2024-12-14 12:33:02.141719] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:02.605 [2024-12-14 12:33:02.141903] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:02.605 
12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:02.605 [2024-12-14 12:33:02.244893] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 [2024-12-14 12:33:02.292748] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:02.605 [2024-12-14 12:33:02.292777] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'e8ed3fa0-8046-44fc-87a2-e4664ece0cc2' was resized: old size 131072, new size 204800 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 [2024-12-14 12:33:02.300681] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:02.605 [2024-12-14 12:33:02.300785] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '58c797be-a4c5-4923-9fc8-1ebb53c7c743' was resized: old size 131072, new size 204800 00:07:02.605 [2024-12-14 12:33:02.300820] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:02.605 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.865 12:33:02 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:02.865 [2024-12-14 12:33:02.408665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.865 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.865 [2024-12-14 12:33:02.440450] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 
being removed: closing lvstore lvs0 00:07:02.865 [2024-12-14 12:33:02.440632] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:02.866 [2024-12-14 12:33:02.440680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:02.866 [2024-12-14 12:33:02.440723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:02.866 [2024-12-14 12:33:02.440893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.866 [2024-12-14 12:33:02.440975] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.866 [2024-12-14 12:33:02.441058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 [2024-12-14 12:33:02.448304] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:02.866 [2024-12-14 12:33:02.448438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.866 [2024-12-14 12:33:02.448482] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:02.866 [2024-12-14 12:33:02.448522] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.866 [2024-12-14 12:33:02.451139] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.866 [2024-12-14 12:33:02.451248] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:07:02.866 [2024-12-14 12:33:02.453189] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev e8ed3fa0-8046-44fc-87a2-e4664ece0cc2 [2024-12-14 12:33:02.453331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev e8ed3fa0-8046-44fc-87a2-e4664ece0cc2 is claimed pt0 00:07:02.866 [2024-12-14 12:33:02.453516] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 58c797be-a4c5-4923-9fc8-1ebb53c7c743 [2024-12-14 12:33:02.453605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 58c797be-a4c5-4923-9fc8-1ebb53c7c743 is claimed 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] [2024-12-14 12:33:02.453812] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 58c797be-a4c5-4923-9fc8-1ebb53c7c743 (2) smaller than existing raid bdev Raid (3) 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:02.866 [2024-12-14 12:33:02.453921] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev e8ed3fa0-8046-44fc-87a2-e4664ece0cc2: File exists 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.866 [2024-12-14 12:33:02.454062] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 [2024-12-14 12:33:02.454110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 [2024-12-14 12:33:02.454400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 [2024-12-14 12:33:02.454634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 [2024-12-14
12:33:02.454682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:02.866 [2024-12-14 12:33:02.454896] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:02.866 [2024-12-14 12:33:02.472505] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61943 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61943 ']' 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61943 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61943 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61943' 00:07:02.866 killing process with pid 61943 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61943 00:07:02.866 [2024-12-14 12:33:02.538409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.866 12:33:02 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61943 00:07:02.866 [2024-12-14 12:33:02.538569] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:02.866 [2024-12-14 12:33:02.538655] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:02.866 [2024-12-14 12:33:02.538708] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:04.772 [2024-12-14 12:33:04.094513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:05.712 12:33:05 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:05.712 00:07:05.712 real 0m4.941s 00:07:05.712 user 0m4.889s 00:07:05.712 sys 0m0.735s 00:07:05.712 12:33:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.712 12:33:05 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.712 
************************************ 00:07:05.712 END TEST raid0_resize_superblock_test 00:07:05.712 ************************************ 00:07:05.712 12:33:05 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:05.712 12:33:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:05.712 12:33:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.712 12:33:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:05.712 ************************************ 00:07:05.712 START TEST raid1_resize_superblock_test 00:07:05.712 ************************************ 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:05.712 Process raid pid: 62042 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=62042 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 62042' 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 62042 00:07:05.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62042 ']' 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.712 12:33:05 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:05.972 [2024-12-14 12:33:05.467394] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:05.972 [2024-12-14 12:33:05.467973] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:05.972 [2024-12-14 12:33:05.641584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.231 [2024-12-14 12:33:05.776879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.491 [2024-12-14 12:33:06.016057] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.491 [2024-12-14 12:33:06.016118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:06.750 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.750 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:06.750 12:33:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:06.750 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.750 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.318 malloc0 00:07:07.318 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.318 12:33:06 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:07.318 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.318 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.318 [2024-12-14 12:33:06.909825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:07.318 [2024-12-14 12:33:06.910009] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.318 [2024-12-14 12:33:06.910074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:07.318 [2024-12-14 12:33:06.910122] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.318 [2024-12-14 12:33:06.912624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.318 [2024-12-14 12:33:06.912717] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:07.318 pt0 00:07:07.318 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.318 12:33:06 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:07.318 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.318 12:33:06 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 c5601e1f-e34a-4828-8beb-1fa60dacda21 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 72c2f161-42b5-4602-a495-395d6fe7d56d 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 f24f305a-148a-4ad4-8084-41321cdd4e8d 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 [2024-12-14 12:33:07.103370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 72c2f161-42b5-4602-a495-395d6fe7d56d is claimed 00:07:07.578 [2024-12-14 12:33:07.103593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f24f305a-148a-4ad4-8084-41321cdd4e8d is claimed 00:07:07.578 [2024-12-14 12:33:07.103755] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:07.578 [2024-12-14 12:33:07.103774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:07.578 [2024-12-14 12:33:07.104095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:07.578 [2024-12-14 12:33:07.104333] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:07.578 [2024-12-14 12:33:07.104356] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:07.578 [2024-12-14 12:33:07.104532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 [2024-12-14 12:33:07.215398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 [2024-12-14 12:33:07.239385] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.578 [2024-12-14 12:33:07.239422] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '72c2f161-42b5-4602-a495-395d6fe7d56d' was resized: old size 131072, new size 204800 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:07.578 12:33:07 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 [2024-12-14 12:33:07.247247] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:07.578 [2024-12-14 12:33:07.247275] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'f24f305a-148a-4ad4-8084-41321cdd4e8d' was resized: old size 131072, new size 204800 00:07:07.578 [2024-12-14 12:33:07.247306] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.578 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.838 [2024-12-14 12:33:07.355182] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.838 [2024-12-14 12:33:07.398896] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:07.838 [2024-12-14 12:33:07.399118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:07:07.838 [2024-12-14 12:33:07.399157] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:07.838 [2024-12-14 12:33:07.399374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.838 [2024-12-14 12:33:07.399658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.838 [2024-12-14 12:33:07.399734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.838 [2024-12-14 12:33:07.399751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.838 [2024-12-14 12:33:07.406770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:07.838 [2024-12-14 12:33:07.406887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:07.838 [2024-12-14 12:33:07.406916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:07.838 [2024-12-14 12:33:07.406930] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:07.838 [2024-12-14 12:33:07.409500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:07.838 [2024-12-14 12:33:07.409546] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:07.838 [2024-12-14 12:33:07.411409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 
72c2f161-42b5-4602-a495-395d6fe7d56d 00:07:07.838 [2024-12-14 12:33:07.411500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 72c2f161-42b5-4602-a495-395d6fe7d56d is claimed 00:07:07.838 [2024-12-14 12:33:07.411618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev f24f305a-148a-4ad4-8084-41321cdd4e8d 00:07:07.838 [2024-12-14 12:33:07.411640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev f24f305a-148a-4ad4-8084-41321cdd4e8d is claimed 00:07:07.838 [2024-12-14 12:33:07.411801] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev f24f305a-148a-4ad4-8084-41321cdd4e8d (2) smaller than existing raid bdev Raid (3) 00:07:07.838 [2024-12-14 12:33:07.411829] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 72c2f161-42b5-4602-a495-395d6fe7d56d: File exists 00:07:07.838 [2024-12-14 12:33:07.411871] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:07.838 [2024-12-14 12:33:07.411886] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:07.838 [2024-12-14 12:33:07.412169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:07.838 pt0 00:07:07.838 [2024-12-14 12:33:07.412358] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:07.838 [2024-12-14 12:33:07.412379] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.838 [2024-12-14 12:33:07.412540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:07.838 [2024-12-14 12:33:07.427679] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 62042 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62042 ']' 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62042 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62042 00:07:07.838 killing process with pid 62042 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62042' 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 62042 00:07:07.838 12:33:07 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 62042 00:07:07.838 [2024-12-14 12:33:07.508031] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.838 [2024-12-14 12:33:07.508193] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.838 [2024-12-14 12:33:07.508276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.838 [2024-12-14 12:33:07.508288] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:09.779 [2024-12-14 12:33:09.055097] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:10.717 12:33:10 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:10.717 00:07:10.717 real 0m4.912s 00:07:10.717 user 0m4.889s 00:07:10.717 sys 0m0.731s 00:07:10.717 12:33:10 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.717 ************************************ 00:07:10.717 END TEST raid1_resize_superblock_test 00:07:10.717 ************************************ 00:07:10.717 12:33:10 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:10.717 12:33:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:10.717 12:33:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:10.717 12:33:10 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:10.717 12:33:10 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:10.717 12:33:10 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:10.717 12:33:10 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:10.717 12:33:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:10.717 12:33:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.717 12:33:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:10.717 ************************************ 00:07:10.717 START TEST raid_function_test_raid0 00:07:10.717 ************************************ 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=62150 00:07:10.717 Process raid pid: 62150 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 62150' 00:07:10.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 62150 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 62150 ']' 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:10.717 12:33:10 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:10.977 [2024-12-14 12:33:10.457470] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:10.977 [2024-12-14 12:33:10.457578] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.977 [2024-12-14 12:33:10.613343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.236 [2024-12-14 12:33:10.750923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.495 [2024-12-14 12:33:10.988129] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.495 [2024-12-14 12:33:10.988190] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:11.754 Base_1 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:11.754 Base_2 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:11.754 [2024-12-14 12:33:11.381289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:11.754 [2024-12-14 12:33:11.383447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:11.754 [2024-12-14 12:33:11.383534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:11.754 [2024-12-14 12:33:11.383549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:11.754 [2024-12-14 12:33:11.383828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:11.754 [2024-12-14 12:33:11.383996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:11.754 [2024-12-14 12:33:11.384006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:11.754 [2024-12-14 12:33:11.384186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:11.754 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:12.014 [2024-12-14 12:33:11.597088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:12.014 /dev/nbd0 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.014 
12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:12.014 1+0 records in 00:07:12.014 1+0 records out 00:07:12.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403093 s, 10.2 MB/s 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:12.014 12:33:11 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:12.273 { 00:07:12.273 "nbd_device": "/dev/nbd0", 00:07:12.273 "bdev_name": "raid" 00:07:12.273 } 00:07:12.273 ]' 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:12.273 { 00:07:12.273 "nbd_device": "/dev/nbd0", 00:07:12.273 "bdev_name": "raid" 00:07:12.273 } 00:07:12.273 ]' 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:12.273 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:12.274 4096+0 records in 00:07:12.274 4096+0 records out 00:07:12.274 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0358413 s, 58.5 MB/s 00:07:12.274 12:33:11 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:12.533 4096+0 records in 00:07:12.533 4096+0 records out 00:07:12.533 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.241008 s, 8.7 MB/s 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:12.533 128+0 records in 00:07:12.533 128+0 records out 00:07:12.533 65536 bytes (66 kB, 64 KiB) copied, 0.00106563 s, 61.5 MB/s 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:12.533 2035+0 records in 00:07:12.533 2035+0 records out 00:07:12.533 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0143543 s, 72.6 MB/s 00:07:12.533 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:12.792 12:33:12 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:12.792 456+0 records in 00:07:12.792 456+0 records out 00:07:12.792 233472 bytes (233 kB, 228 KiB) copied, 0.0024455 s, 95.5 MB/s 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.792 12:33:12 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.792 [2024-12-14 12:33:12.526296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.792 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:13.051 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 62150 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 62150 ']' 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 62150 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62150 00:07:13.311 killing process with pid 62150 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62150' 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 
62150 00:07:13.311 12:33:12 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 62150 00:07:13.311 [2024-12-14 12:33:12.833336] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:13.311 [2024-12-14 12:33:12.833511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:13.311 [2024-12-14 12:33:12.833574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:13.311 [2024-12-14 12:33:12.833595] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:13.570 [2024-12-14 12:33:13.057509] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.949 ************************************ 00:07:14.949 END TEST raid_function_test_raid0 00:07:14.949 ************************************ 00:07:14.949 12:33:14 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:14.949 00:07:14.949 real 0m3.928s 00:07:14.949 user 0m4.365s 00:07:14.949 sys 0m1.020s 00:07:14.949 12:33:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.949 12:33:14 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:14.949 12:33:14 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:14.949 12:33:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.949 12:33:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.949 12:33:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.949 ************************************ 00:07:14.949 START TEST raid_function_test_concat 00:07:14.949 ************************************ 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:14.949 Process raid pid: 62279 00:07:14.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=62279 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 62279' 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 62279 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 62279 ']' 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:14.949 12:33:14 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.949 [2024-12-14 12:33:14.448819] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:14.949 [2024-12-14 12:33:14.448962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.949 [2024-12-14 12:33:14.604434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.209 [2024-12-14 12:33:14.748262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.468 [2024-12-14 12:33:14.991746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.468 [2024-12-14 12:33:14.991814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:15.727 Base_1 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:15.727 Base_2 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:15.727 [2024-12-14 12:33:15.391911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:15.727 [2024-12-14 12:33:15.393947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:15.727 [2024-12-14 12:33:15.394186] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:15.727 [2024-12-14 12:33:15.394207] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:15.727 [2024-12-14 12:33:15.394487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.727 [2024-12-14 12:33:15.394679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:15.727 [2024-12-14 12:33:15.394690] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:15.727 [2024-12-14 12:33:15.394869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.727 12:33:15 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:15.727 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:15.986 [2024-12-14 12:33:15.639573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:15.986 /dev/nbd0 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:15.986 1+0 records in 00:07:15.986 1+0 records out 00:07:15.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395596 s, 10.4 MB/s 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk.sock 00:07:15.986 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:16.245 { 00:07:16.245 "nbd_device": "/dev/nbd0", 00:07:16.245 "bdev_name": "raid" 00:07:16.245 } 00:07:16.245 ]' 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:16.245 { 00:07:16.245 "nbd_device": "/dev/nbd0", 00:07:16.245 "bdev_name": "raid" 00:07:16.245 } 00:07:16.245 ]' 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC 
/dev/nbd0 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:16.245 12:33:15 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:16.505 4096+0 records in 00:07:16.505 4096+0 records out 00:07:16.505 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0349382 s, 60.0 MB/s 00:07:16.505 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:16.765 4096+0 records in 00:07:16.765 4096+0 records out 00:07:16.765 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.268043 s, 7.8 MB/s 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 
2097152 /raidtest/raidrandtest /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:16.765 128+0 records in 00:07:16.765 128+0 records out 00:07:16.765 65536 bytes (66 kB, 64 KiB) copied, 0.00113776 s, 57.6 MB/s 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:16.765 2035+0 records in 00:07:16.765 2035+0 records out 00:07:16.765 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00783154 s, 133 MB/s 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:16.765 12:33:16 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:16.765 456+0 records in 00:07:16.765 456+0 records out 00:07:16.765 233472 bytes (233 kB, 228 KiB) copied, 0.00375957 s, 62.1 MB/s 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:16.765 
12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:16.765 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.025 [2024-12-14 12:33:16.582573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:17.025 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.285 12:33:16 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 62279 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 62279 ']' 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 62279 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62279 00:07:17.285 killing process with pid 62279 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62279' 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 62279 00:07:17.285 12:33:16 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 62279 00:07:17.285 [2024-12-14 12:33:16.878517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:17.285 [2024-12-14 12:33:16.878753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:17.285 [2024-12-14 12:33:16.878830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:17.285 [2024-12-14 12:33:16.878845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:17.545 [2024-12-14 12:33:17.108318] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.927 12:33:18 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:18.927 00:07:18.927 real 0m3.969s 00:07:18.927 user 0m4.383s 00:07:18.927 sys 0m1.064s 00:07:18.927 12:33:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.927 12:33:18 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:18.927 ************************************ 00:07:18.927 END TEST raid_function_test_concat 00:07:18.927 ************************************ 00:07:18.927 12:33:18 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:18.927 12:33:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.927 12:33:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.927 12:33:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.927 ************************************ 00:07:18.927 START TEST raid0_resize_test 00:07:18.927 ************************************ 00:07:18.927 12:33:18 
bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=62406 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 62406' 00:07:18.927 Process raid pid: 62406 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 62406 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 62406 ']' 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:18.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.927 12:33:18 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.927 [2024-12-14 12:33:18.492399] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:18.927 [2024-12-14 12:33:18.492712] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.187 [2024-12-14 12:33:18.681924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.187 [2024-12-14 12:33:18.821142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.446 [2024-12-14 12:33:19.060549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.446 [2024-12-14 12:33:19.060625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.706 Base_1 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.706 Base_2 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.706 [2024-12-14 12:33:19.340150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:19.706 [2024-12-14 12:33:19.342259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:19.706 [2024-12-14 12:33:19.342324] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:19.706 [2024-12-14 12:33:19.342338] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:19.706 [2024-12-14 12:33:19.342618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:19.706 [2024-12-14 12:33:19.342757] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:19.706 [2024-12-14 12:33:19.342767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:19.706 [2024-12-14 12:33:19.342938] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.706 [2024-12-14 12:33:19.352161] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:19.706 [2024-12-14 12:33:19.352208] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:19.706 true 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.706 [2024-12-14 12:33:19.364337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.706 [2024-12-14 12:33:19.408005] 
bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:19.706 [2024-12-14 12:33:19.408104] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:19.706 [2024-12-14 12:33:19.408193] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:19.706 true 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.706 [2024-12-14 12:33:19.424159] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:19.706 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 62406 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 62406 ']' 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 62406 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@959 -- # uname 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62406 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:19.966 killing process with pid 62406 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62406' 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 62406 00:07:19.966 [2024-12-14 12:33:19.509437] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:19.966 [2024-12-14 12:33:19.509559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:19.966 12:33:19 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 62406 00:07:19.966 [2024-12-14 12:33:19.509623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:19.966 [2024-12-14 12:33:19.509635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:19.966 [2024-12-14 12:33:19.528018] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.358 ************************************ 00:07:21.358 END TEST raid0_resize_test 00:07:21.358 ************************************ 00:07:21.358 12:33:20 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:21.358 00:07:21.358 real 0m2.373s 00:07:21.358 user 0m2.415s 00:07:21.358 sys 0m0.427s 00:07:21.358 12:33:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.358 12:33:20 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # 
set +x 00:07:21.358 12:33:20 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:21.358 12:33:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:21.358 12:33:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.358 12:33:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 ************************************ 00:07:21.358 START TEST raid1_resize_test 00:07:21.358 ************************************ 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=62463 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 62463' 00:07:21.358 Process raid pid: 62463 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 62463 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # 
'[' -z 62463 ']' 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.358 12:33:20 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.358 [2024-12-14 12:33:20.916446] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:21.358 [2024-12-14 12:33:20.916576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.358 [2024-12-14 12:33:21.070869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.617 [2024-12-14 12:33:21.211194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.877 [2024-12-14 12:33:21.454639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.877 [2024-12-14 12:33:21.454698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.137 Base_1 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.137 Base_2 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.137 [2024-12-14 12:33:21.812530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:22.137 [2024-12-14 12:33:21.814615] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:22.137 [2024-12-14 12:33:21.814807] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:22.137 [2024-12-14 12:33:21.814828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:22.137 [2024-12-14 12:33:21.815130] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:22.137 [2024-12-14 12:33:21.815279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:22.137 [2024-12-14 12:33:21.815289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 
0x617000007780 00:07:22.137 [2024-12-14 12:33:21.815434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.137 [2024-12-14 12:33:21.824488] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:22.137 [2024-12-14 12:33:21.824524] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:22.137 true 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.137 [2024-12-14 12:33:21.840645] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.137 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:22.397 12:33:21 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.397 [2024-12-14 12:33:21.884451] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:22.397 [2024-12-14 12:33:21.884587] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:22.397 [2024-12-14 12:33:21.884667] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:22.397 true 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.397 [2024-12-14 12:33:21.896561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:22.397 12:33:21 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 62463 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 62463 ']' 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 62463 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62463 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62463' 00:07:22.397 killing process with pid 62463 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 62463 00:07:22.397 [2024-12-14 12:33:21.976379] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.397 12:33:21 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 62463 00:07:22.397 [2024-12-14 12:33:21.976616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:22.397 [2024-12-14 12:33:21.977279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:22.397 [2024-12-14 12:33:21.977365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:22.397 [2024-12-14 12:33:21.995498] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.775 12:33:23 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- 
# return 0 00:07:23.775 00:07:23.775 real 0m2.419s 00:07:23.775 user 0m2.497s 00:07:23.775 sys 0m0.408s 00:07:23.775 12:33:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.775 12:33:23 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.775 ************************************ 00:07:23.775 END TEST raid1_resize_test 00:07:23.775 ************************************ 00:07:23.775 12:33:23 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:23.775 12:33:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:23.775 12:33:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:23.775 12:33:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:23.775 12:33:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.775 12:33:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.775 ************************************ 00:07:23.775 START TEST raid_state_function_test 00:07:23.775 ************************************ 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62520 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62520' 00:07:23.775 Process raid pid: 62520 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62520 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62520 ']' 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.775 12:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.775 [2024-12-14 12:33:23.412033] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:23.775 [2024-12-14 12:33:23.412244] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.034 [2024-12-14 12:33:23.586665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.034 [2024-12-14 12:33:23.730783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.294 [2024-12-14 12:33:23.969257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.294 [2024-12-14 12:33:23.969442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.554 [2024-12-14 12:33:24.247935] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.554 [2024-12-14 12:33:24.248138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.554 [2024-12-14 12:33:24.248180] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.554 [2024-12-14 12:33:24.248211] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.554 12:33:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:24.554 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.813 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.813 "name": "Existed_Raid", 00:07:24.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.813 "strip_size_kb": 64, 00:07:24.813 "state": "configuring", 00:07:24.813 
"raid_level": "raid0", 00:07:24.813 "superblock": false, 00:07:24.813 "num_base_bdevs": 2, 00:07:24.813 "num_base_bdevs_discovered": 0, 00:07:24.813 "num_base_bdevs_operational": 2, 00:07:24.813 "base_bdevs_list": [ 00:07:24.813 { 00:07:24.813 "name": "BaseBdev1", 00:07:24.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.813 "is_configured": false, 00:07:24.813 "data_offset": 0, 00:07:24.813 "data_size": 0 00:07:24.813 }, 00:07:24.813 { 00:07:24.813 "name": "BaseBdev2", 00:07:24.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.813 "is_configured": false, 00:07:24.813 "data_offset": 0, 00:07:24.813 "data_size": 0 00:07:24.813 } 00:07:24.813 ] 00:07:24.813 }' 00:07:24.813 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.813 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.073 [2024-12-14 12:33:24.659229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.073 [2024-12-14 12:33:24.659393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:25.073 [2024-12-14 12:33:24.671146] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:25.073 [2024-12-14 12:33:24.671254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:25.073 [2024-12-14 12:33:24.671286] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.073 [2024-12-14 12:33:24.671319] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.073 [2024-12-14 12:33:24.725664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.073 BaseBdev1 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.073 [ 00:07:25.073 { 00:07:25.073 "name": "BaseBdev1", 00:07:25.073 "aliases": [ 00:07:25.073 "738c61ab-c7cd-4728-b1f0-c29fedd23b0d" 00:07:25.073 ], 00:07:25.073 "product_name": "Malloc disk", 00:07:25.073 "block_size": 512, 00:07:25.073 "num_blocks": 65536, 00:07:25.073 "uuid": "738c61ab-c7cd-4728-b1f0-c29fedd23b0d", 00:07:25.073 "assigned_rate_limits": { 00:07:25.073 "rw_ios_per_sec": 0, 00:07:25.073 "rw_mbytes_per_sec": 0, 00:07:25.073 "r_mbytes_per_sec": 0, 00:07:25.073 "w_mbytes_per_sec": 0 00:07:25.073 }, 00:07:25.073 "claimed": true, 00:07:25.073 "claim_type": "exclusive_write", 00:07:25.073 "zoned": false, 00:07:25.073 "supported_io_types": { 00:07:25.073 "read": true, 00:07:25.073 "write": true, 00:07:25.073 "unmap": true, 00:07:25.073 "flush": true, 00:07:25.073 "reset": true, 00:07:25.073 "nvme_admin": false, 00:07:25.073 "nvme_io": false, 00:07:25.073 "nvme_io_md": false, 00:07:25.073 "write_zeroes": true, 00:07:25.073 "zcopy": true, 00:07:25.073 "get_zone_info": false, 00:07:25.073 "zone_management": false, 00:07:25.073 "zone_append": false, 00:07:25.073 "compare": false, 00:07:25.073 "compare_and_write": false, 00:07:25.073 "abort": true, 00:07:25.073 "seek_hole": false, 00:07:25.073 "seek_data": false, 00:07:25.073 "copy": true, 00:07:25.073 "nvme_iov_md": 
false 00:07:25.073 }, 00:07:25.073 "memory_domains": [ 00:07:25.073 { 00:07:25.073 "dma_device_id": "system", 00:07:25.073 "dma_device_type": 1 00:07:25.073 }, 00:07:25.073 { 00:07:25.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.073 "dma_device_type": 2 00:07:25.073 } 00:07:25.073 ], 00:07:25.073 "driver_specific": {} 00:07:25.073 } 00:07:25.073 ] 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.073 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.074 
12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.074 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.074 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.333 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.333 "name": "Existed_Raid", 00:07:25.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.333 "strip_size_kb": 64, 00:07:25.333 "state": "configuring", 00:07:25.333 "raid_level": "raid0", 00:07:25.333 "superblock": false, 00:07:25.333 "num_base_bdevs": 2, 00:07:25.333 "num_base_bdevs_discovered": 1, 00:07:25.333 "num_base_bdevs_operational": 2, 00:07:25.333 "base_bdevs_list": [ 00:07:25.333 { 00:07:25.333 "name": "BaseBdev1", 00:07:25.333 "uuid": "738c61ab-c7cd-4728-b1f0-c29fedd23b0d", 00:07:25.333 "is_configured": true, 00:07:25.333 "data_offset": 0, 00:07:25.333 "data_size": 65536 00:07:25.333 }, 00:07:25.333 { 00:07:25.333 "name": "BaseBdev2", 00:07:25.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.333 "is_configured": false, 00:07:25.333 "data_offset": 0, 00:07:25.333 "data_size": 0 00:07:25.333 } 00:07:25.333 ] 00:07:25.333 }' 00:07:25.333 12:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.333 12:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.593 [2024-12-14 12:33:25.229075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:25.593 [2024-12-14 12:33:25.229254] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.593 [2024-12-14 12:33:25.241064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.593 [2024-12-14 12:33:25.243427] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:25.593 [2024-12-14 12:33:25.243531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:25.593 "name": "Existed_Raid", 00:07:25.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.593 "strip_size_kb": 64, 00:07:25.593 "state": "configuring", 00:07:25.593 "raid_level": "raid0", 00:07:25.593 "superblock": false, 00:07:25.593 "num_base_bdevs": 2, 00:07:25.593 "num_base_bdevs_discovered": 1, 00:07:25.593 "num_base_bdevs_operational": 2, 00:07:25.593 "base_bdevs_list": [ 00:07:25.593 { 00:07:25.593 "name": "BaseBdev1", 00:07:25.593 "uuid": "738c61ab-c7cd-4728-b1f0-c29fedd23b0d", 00:07:25.593 "is_configured": true, 00:07:25.593 "data_offset": 0, 00:07:25.593 "data_size": 65536 00:07:25.593 }, 00:07:25.593 { 00:07:25.593 "name": "BaseBdev2", 00:07:25.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.593 "is_configured": false, 00:07:25.593 "data_offset": 0, 00:07:25.593 "data_size": 0 00:07:25.593 } 00:07:25.593 
] 00:07:25.593 }' 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:25.593 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.162 [2024-12-14 12:33:25.702286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:26.162 [2024-12-14 12:33:25.702461] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:26.162 [2024-12-14 12:33:25.702479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:26.162 [2024-12-14 12:33:25.702815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:26.162 [2024-12-14 12:33:25.703065] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:26.162 [2024-12-14 12:33:25.703084] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:26.162 [2024-12-14 12:33:25.703457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:26.162 BaseBdev2 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:26.162 12:33:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.162 [ 00:07:26.162 { 00:07:26.162 "name": "BaseBdev2", 00:07:26.162 "aliases": [ 00:07:26.162 "50275d47-2724-4024-9de1-3e578c83892c" 00:07:26.162 ], 00:07:26.162 "product_name": "Malloc disk", 00:07:26.162 "block_size": 512, 00:07:26.162 "num_blocks": 65536, 00:07:26.162 "uuid": "50275d47-2724-4024-9de1-3e578c83892c", 00:07:26.162 "assigned_rate_limits": { 00:07:26.162 "rw_ios_per_sec": 0, 00:07:26.162 "rw_mbytes_per_sec": 0, 00:07:26.162 "r_mbytes_per_sec": 0, 00:07:26.162 "w_mbytes_per_sec": 0 00:07:26.162 }, 00:07:26.162 "claimed": true, 00:07:26.162 "claim_type": "exclusive_write", 00:07:26.162 "zoned": false, 00:07:26.162 "supported_io_types": { 00:07:26.162 "read": true, 00:07:26.162 "write": true, 00:07:26.162 "unmap": true, 00:07:26.162 "flush": true, 00:07:26.162 "reset": true, 00:07:26.162 "nvme_admin": false, 00:07:26.162 "nvme_io": false, 00:07:26.162 "nvme_io_md": 
false, 00:07:26.162 "write_zeroes": true, 00:07:26.162 "zcopy": true, 00:07:26.162 "get_zone_info": false, 00:07:26.162 "zone_management": false, 00:07:26.162 "zone_append": false, 00:07:26.162 "compare": false, 00:07:26.162 "compare_and_write": false, 00:07:26.162 "abort": true, 00:07:26.162 "seek_hole": false, 00:07:26.162 "seek_data": false, 00:07:26.162 "copy": true, 00:07:26.162 "nvme_iov_md": false 00:07:26.162 }, 00:07:26.162 "memory_domains": [ 00:07:26.162 { 00:07:26.162 "dma_device_id": "system", 00:07:26.162 "dma_device_type": 1 00:07:26.162 }, 00:07:26.162 { 00:07:26.162 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.162 "dma_device_type": 2 00:07:26.162 } 00:07:26.162 ], 00:07:26.162 "driver_specific": {} 00:07:26.162 } 00:07:26.162 ] 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.162 "name": "Existed_Raid", 00:07:26.162 "uuid": "a76a31eb-223c-4b0a-b828-482bb4fb8cf7", 00:07:26.162 "strip_size_kb": 64, 00:07:26.162 "state": "online", 00:07:26.162 "raid_level": "raid0", 00:07:26.162 "superblock": false, 00:07:26.162 "num_base_bdevs": 2, 00:07:26.162 "num_base_bdevs_discovered": 2, 00:07:26.162 "num_base_bdevs_operational": 2, 00:07:26.162 "base_bdevs_list": [ 00:07:26.162 { 00:07:26.162 "name": "BaseBdev1", 00:07:26.162 "uuid": "738c61ab-c7cd-4728-b1f0-c29fedd23b0d", 00:07:26.162 "is_configured": true, 00:07:26.162 "data_offset": 0, 00:07:26.162 "data_size": 65536 00:07:26.162 }, 00:07:26.162 { 00:07:26.162 "name": "BaseBdev2", 00:07:26.162 "uuid": "50275d47-2724-4024-9de1-3e578c83892c", 00:07:26.162 "is_configured": true, 00:07:26.162 "data_offset": 0, 00:07:26.162 "data_size": 65536 00:07:26.162 } 00:07:26.162 ] 00:07:26.162 }' 00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:26.162 12:33:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.732 [2024-12-14 12:33:26.198534] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:26.732 "name": "Existed_Raid", 00:07:26.732 "aliases": [ 00:07:26.732 "a76a31eb-223c-4b0a-b828-482bb4fb8cf7" 00:07:26.732 ], 00:07:26.732 "product_name": "Raid Volume", 00:07:26.732 "block_size": 512, 00:07:26.732 "num_blocks": 131072, 00:07:26.732 "uuid": "a76a31eb-223c-4b0a-b828-482bb4fb8cf7", 00:07:26.732 "assigned_rate_limits": { 00:07:26.732 "rw_ios_per_sec": 0, 00:07:26.732 "rw_mbytes_per_sec": 0, 00:07:26.732 "r_mbytes_per_sec": 
0, 00:07:26.732 "w_mbytes_per_sec": 0 00:07:26.732 }, 00:07:26.732 "claimed": false, 00:07:26.732 "zoned": false, 00:07:26.732 "supported_io_types": { 00:07:26.732 "read": true, 00:07:26.732 "write": true, 00:07:26.732 "unmap": true, 00:07:26.732 "flush": true, 00:07:26.732 "reset": true, 00:07:26.732 "nvme_admin": false, 00:07:26.732 "nvme_io": false, 00:07:26.732 "nvme_io_md": false, 00:07:26.732 "write_zeroes": true, 00:07:26.732 "zcopy": false, 00:07:26.732 "get_zone_info": false, 00:07:26.732 "zone_management": false, 00:07:26.732 "zone_append": false, 00:07:26.732 "compare": false, 00:07:26.732 "compare_and_write": false, 00:07:26.732 "abort": false, 00:07:26.732 "seek_hole": false, 00:07:26.732 "seek_data": false, 00:07:26.732 "copy": false, 00:07:26.732 "nvme_iov_md": false 00:07:26.732 }, 00:07:26.732 "memory_domains": [ 00:07:26.732 { 00:07:26.732 "dma_device_id": "system", 00:07:26.732 "dma_device_type": 1 00:07:26.732 }, 00:07:26.732 { 00:07:26.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.732 "dma_device_type": 2 00:07:26.732 }, 00:07:26.732 { 00:07:26.732 "dma_device_id": "system", 00:07:26.732 "dma_device_type": 1 00:07:26.732 }, 00:07:26.732 { 00:07:26.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.732 "dma_device_type": 2 00:07:26.732 } 00:07:26.732 ], 00:07:26.732 "driver_specific": { 00:07:26.732 "raid": { 00:07:26.732 "uuid": "a76a31eb-223c-4b0a-b828-482bb4fb8cf7", 00:07:26.732 "strip_size_kb": 64, 00:07:26.732 "state": "online", 00:07:26.732 "raid_level": "raid0", 00:07:26.732 "superblock": false, 00:07:26.732 "num_base_bdevs": 2, 00:07:26.732 "num_base_bdevs_discovered": 2, 00:07:26.732 "num_base_bdevs_operational": 2, 00:07:26.732 "base_bdevs_list": [ 00:07:26.732 { 00:07:26.732 "name": "BaseBdev1", 00:07:26.732 "uuid": "738c61ab-c7cd-4728-b1f0-c29fedd23b0d", 00:07:26.732 "is_configured": true, 00:07:26.732 "data_offset": 0, 00:07:26.732 "data_size": 65536 00:07:26.732 }, 00:07:26.732 { 00:07:26.732 "name": "BaseBdev2", 
00:07:26.732 "uuid": "50275d47-2724-4024-9de1-3e578c83892c", 00:07:26.732 "is_configured": true, 00:07:26.732 "data_offset": 0, 00:07:26.732 "data_size": 65536 00:07:26.732 } 00:07:26.732 ] 00:07:26.732 } 00:07:26.732 } 00:07:26.732 }' 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:26.732 BaseBdev2' 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:26.732 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.733 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.733 [2024-12-14 12:33:26.406366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:26.733 [2024-12-14 12:33:26.406529] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:26.733 [2024-12-14 12:33:26.406641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:26.993 "name": "Existed_Raid", 00:07:26.993 "uuid": "a76a31eb-223c-4b0a-b828-482bb4fb8cf7", 00:07:26.993 "strip_size_kb": 64, 00:07:26.993 
"state": "offline", 00:07:26.993 "raid_level": "raid0", 00:07:26.993 "superblock": false, 00:07:26.993 "num_base_bdevs": 2, 00:07:26.993 "num_base_bdevs_discovered": 1, 00:07:26.993 "num_base_bdevs_operational": 1, 00:07:26.993 "base_bdevs_list": [ 00:07:26.993 { 00:07:26.993 "name": null, 00:07:26.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.993 "is_configured": false, 00:07:26.993 "data_offset": 0, 00:07:26.993 "data_size": 65536 00:07:26.993 }, 00:07:26.993 { 00:07:26.993 "name": "BaseBdev2", 00:07:26.993 "uuid": "50275d47-2724-4024-9de1-3e578c83892c", 00:07:26.993 "is_configured": true, 00:07:26.993 "data_offset": 0, 00:07:26.993 "data_size": 65536 00:07:26.993 } 00:07:26.993 ] 00:07:26.993 }' 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:26.993 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.253 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:27.253 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.253 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.253 12:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:27.253 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.253 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.253 12:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.512 [2024-12-14 12:33:27.026336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:27.512 [2024-12-14 12:33:27.026519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62520 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62520 ']' 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62520 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62520 00:07:27.512 killing process with pid 62520 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62520' 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62520 00:07:27.512 [2024-12-14 12:33:27.232611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.512 12:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62520 00:07:27.772 [2024-12-14 12:33:27.251036] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:29.153 00:07:29.153 real 0m5.195s 00:07:29.153 user 0m7.277s 00:07:29.153 sys 0m0.901s 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.153 ************************************ 00:07:29.153 END TEST raid_state_function_test 00:07:29.153 ************************************ 00:07:29.153 12:33:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:29.153 12:33:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:29.153 12:33:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.153 12:33:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:29.153 ************************************ 00:07:29.153 START TEST raid_state_function_test_sb 00:07:29.153 ************************************ 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:29.153 Process raid pid: 62773 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62773 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62773' 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62773 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62773 ']' 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.153 12:33:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.153 [2024-12-14 12:33:28.678181] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:29.153 [2024-12-14 12:33:28.678449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:29.153 [2024-12-14 12:33:28.854827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.413 [2024-12-14 12:33:28.996868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.672 [2024-12-14 12:33:29.241603] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.672 [2024-12-14 12:33:29.241661] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:29.932 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.933 [2024-12-14 12:33:29.495223] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:29.933 [2024-12-14 12:33:29.495306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:29.933 [2024-12-14 12:33:29.495320] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:29.933 [2024-12-14 12:33:29.495333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.933 "name": "Existed_Raid", 00:07:29.933 "uuid": "9f0a824b-5742-4254-824f-21c2caed2c0b", 00:07:29.933 "strip_size_kb": 64, 00:07:29.933 "state": "configuring", 00:07:29.933 "raid_level": "raid0", 00:07:29.933 "superblock": true, 00:07:29.933 "num_base_bdevs": 2, 00:07:29.933 "num_base_bdevs_discovered": 0, 00:07:29.933 "num_base_bdevs_operational": 2, 00:07:29.933 "base_bdevs_list": [ 00:07:29.933 { 00:07:29.933 "name": "BaseBdev1", 00:07:29.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.933 "is_configured": false, 00:07:29.933 "data_offset": 0, 00:07:29.933 "data_size": 0 00:07:29.933 }, 00:07:29.933 { 00:07:29.933 "name": "BaseBdev2", 00:07:29.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.933 "is_configured": false, 00:07:29.933 "data_offset": 0, 00:07:29.933 "data_size": 0 00:07:29.933 } 00:07:29.933 ] 00:07:29.933 }' 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.933 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.503 [2024-12-14 12:33:29.954431] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:30.503 
[2024-12-14 12:33:29.954604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.503 [2024-12-14 12:33:29.962334] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:30.503 [2024-12-14 12:33:29.962440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:30.503 [2024-12-14 12:33:29.962475] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:30.503 [2024-12-14 12:33:29.962510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.503 12:33:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.503 [2024-12-14 12:33:30.015051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:30.503 BaseBdev1 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # 
waitforbdev BaseBdev1 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.503 [ 00:07:30.503 { 00:07:30.503 "name": "BaseBdev1", 00:07:30.503 "aliases": [ 00:07:30.503 "809ffa71-c71d-437b-8e17-3350ea4f9097" 00:07:30.503 ], 00:07:30.503 "product_name": "Malloc disk", 00:07:30.503 "block_size": 512, 00:07:30.503 "num_blocks": 65536, 00:07:30.503 "uuid": "809ffa71-c71d-437b-8e17-3350ea4f9097", 00:07:30.503 "assigned_rate_limits": { 00:07:30.503 "rw_ios_per_sec": 0, 00:07:30.503 "rw_mbytes_per_sec": 0, 00:07:30.503 "r_mbytes_per_sec": 0, 00:07:30.503 "w_mbytes_per_sec": 0 00:07:30.503 }, 00:07:30.503 "claimed": true, 00:07:30.503 "claim_type": 
"exclusive_write", 00:07:30.503 "zoned": false, 00:07:30.503 "supported_io_types": { 00:07:30.503 "read": true, 00:07:30.503 "write": true, 00:07:30.503 "unmap": true, 00:07:30.503 "flush": true, 00:07:30.503 "reset": true, 00:07:30.503 "nvme_admin": false, 00:07:30.503 "nvme_io": false, 00:07:30.503 "nvme_io_md": false, 00:07:30.503 "write_zeroes": true, 00:07:30.503 "zcopy": true, 00:07:30.503 "get_zone_info": false, 00:07:30.503 "zone_management": false, 00:07:30.503 "zone_append": false, 00:07:30.503 "compare": false, 00:07:30.503 "compare_and_write": false, 00:07:30.503 "abort": true, 00:07:30.503 "seek_hole": false, 00:07:30.503 "seek_data": false, 00:07:30.503 "copy": true, 00:07:30.503 "nvme_iov_md": false 00:07:30.503 }, 00:07:30.503 "memory_domains": [ 00:07:30.503 { 00:07:30.503 "dma_device_id": "system", 00:07:30.503 "dma_device_type": 1 00:07:30.503 }, 00:07:30.503 { 00:07:30.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:30.503 "dma_device_type": 2 00:07:30.503 } 00:07:30.503 ], 00:07:30.503 "driver_specific": {} 00:07:30.503 } 00:07:30.503 ] 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:30.503 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:30.504 "name": "Existed_Raid", 00:07:30.504 "uuid": "9eec1a0f-e50a-410a-a755-466263d2df45", 00:07:30.504 "strip_size_kb": 64, 00:07:30.504 "state": "configuring", 00:07:30.504 "raid_level": "raid0", 00:07:30.504 "superblock": true, 00:07:30.504 "num_base_bdevs": 2, 00:07:30.504 "num_base_bdevs_discovered": 1, 00:07:30.504 "num_base_bdevs_operational": 2, 00:07:30.504 "base_bdevs_list": [ 00:07:30.504 { 00:07:30.504 "name": "BaseBdev1", 00:07:30.504 "uuid": "809ffa71-c71d-437b-8e17-3350ea4f9097", 00:07:30.504 "is_configured": true, 00:07:30.504 "data_offset": 2048, 00:07:30.504 "data_size": 63488 00:07:30.504 }, 00:07:30.504 { 00:07:30.504 "name": "BaseBdev2", 00:07:30.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:30.504 "is_configured": false, 00:07:30.504 "data_offset": 0, 00:07:30.504 
"data_size": 0 00:07:30.504 } 00:07:30.504 ] 00:07:30.504 }' 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:30.504 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.073 [2024-12-14 12:33:30.514270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:31.073 [2024-12-14 12:33:30.514470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.073 [2024-12-14 12:33:30.526338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:31.073 [2024-12-14 12:33:30.528636] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:31.073 [2024-12-14 12:33:30.528705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 
00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:31.073 "name": "Existed_Raid", 00:07:31.073 "uuid": "33a9d5ca-a10f-4789-a7ad-a7e2b1a92be8", 00:07:31.073 "strip_size_kb": 64, 00:07:31.073 "state": "configuring", 00:07:31.073 "raid_level": "raid0", 00:07:31.073 "superblock": true, 00:07:31.073 "num_base_bdevs": 2, 00:07:31.073 "num_base_bdevs_discovered": 1, 00:07:31.073 "num_base_bdevs_operational": 2, 00:07:31.073 "base_bdevs_list": [ 00:07:31.073 { 00:07:31.073 "name": "BaseBdev1", 00:07:31.073 "uuid": "809ffa71-c71d-437b-8e17-3350ea4f9097", 00:07:31.073 "is_configured": true, 00:07:31.073 "data_offset": 2048, 00:07:31.073 "data_size": 63488 00:07:31.073 }, 00:07:31.073 { 00:07:31.073 "name": "BaseBdev2", 00:07:31.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:31.073 "is_configured": false, 00:07:31.073 "data_offset": 0, 00:07:31.073 "data_size": 0 00:07:31.073 } 00:07:31.073 ] 00:07:31.073 }' 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.073 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.333 12:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:31.333 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.333 12:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.333 [2024-12-14 12:33:31.005984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:31.333 [2024-12-14 12:33:31.006392] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:31.333 [2024-12-14 12:33:31.006448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:31.333 [2024-12-14 12:33:31.006829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:31.333 BaseBdev2 00:07:31.333 [2024-12-14 
12:33:31.007059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:31.333 [2024-12-14 12:33:31.007077] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:31.333 [2024-12-14 12:33:31.007234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:07:31.333 [ 00:07:31.333 { 00:07:31.333 "name": "BaseBdev2", 00:07:31.333 "aliases": [ 00:07:31.333 "97d70806-a7f3-4c15-9cd0-0252b007a4d4" 00:07:31.333 ], 00:07:31.333 "product_name": "Malloc disk", 00:07:31.333 "block_size": 512, 00:07:31.333 "num_blocks": 65536, 00:07:31.333 "uuid": "97d70806-a7f3-4c15-9cd0-0252b007a4d4", 00:07:31.333 "assigned_rate_limits": { 00:07:31.333 "rw_ios_per_sec": 0, 00:07:31.333 "rw_mbytes_per_sec": 0, 00:07:31.333 "r_mbytes_per_sec": 0, 00:07:31.333 "w_mbytes_per_sec": 0 00:07:31.333 }, 00:07:31.333 "claimed": true, 00:07:31.333 "claim_type": "exclusive_write", 00:07:31.333 "zoned": false, 00:07:31.333 "supported_io_types": { 00:07:31.333 "read": true, 00:07:31.333 "write": true, 00:07:31.333 "unmap": true, 00:07:31.333 "flush": true, 00:07:31.333 "reset": true, 00:07:31.333 "nvme_admin": false, 00:07:31.333 "nvme_io": false, 00:07:31.333 "nvme_io_md": false, 00:07:31.333 "write_zeroes": true, 00:07:31.333 "zcopy": true, 00:07:31.333 "get_zone_info": false, 00:07:31.333 "zone_management": false, 00:07:31.333 "zone_append": false, 00:07:31.333 "compare": false, 00:07:31.333 "compare_and_write": false, 00:07:31.333 "abort": true, 00:07:31.333 "seek_hole": false, 00:07:31.333 "seek_data": false, 00:07:31.333 "copy": true, 00:07:31.333 "nvme_iov_md": false 00:07:31.333 }, 00:07:31.333 "memory_domains": [ 00:07:31.333 { 00:07:31.333 "dma_device_id": "system", 00:07:31.333 "dma_device_type": 1 00:07:31.333 }, 00:07:31.333 { 00:07:31.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.333 "dma_device_type": 2 00:07:31.333 } 00:07:31.333 ], 00:07:31.333 "driver_specific": {} 00:07:31.333 } 00:07:31.333 ] 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:31.333 
12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.333 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.593 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.593 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:31.593 "name": 
"Existed_Raid", 00:07:31.593 "uuid": "33a9d5ca-a10f-4789-a7ad-a7e2b1a92be8", 00:07:31.593 "strip_size_kb": 64, 00:07:31.593 "state": "online", 00:07:31.593 "raid_level": "raid0", 00:07:31.593 "superblock": true, 00:07:31.593 "num_base_bdevs": 2, 00:07:31.593 "num_base_bdevs_discovered": 2, 00:07:31.593 "num_base_bdevs_operational": 2, 00:07:31.593 "base_bdevs_list": [ 00:07:31.593 { 00:07:31.593 "name": "BaseBdev1", 00:07:31.593 "uuid": "809ffa71-c71d-437b-8e17-3350ea4f9097", 00:07:31.593 "is_configured": true, 00:07:31.593 "data_offset": 2048, 00:07:31.593 "data_size": 63488 00:07:31.593 }, 00:07:31.593 { 00:07:31.593 "name": "BaseBdev2", 00:07:31.593 "uuid": "97d70806-a7f3-4c15-9cd0-0252b007a4d4", 00:07:31.593 "is_configured": true, 00:07:31.593 "data_offset": 2048, 00:07:31.593 "data_size": 63488 00:07:31.593 } 00:07:31.593 ] 00:07:31.593 }' 00:07:31.593 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:31.593 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.878 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:31.878 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:31.878 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 
00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.879 [2024-12-14 12:33:31.449548] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:31.879 "name": "Existed_Raid", 00:07:31.879 "aliases": [ 00:07:31.879 "33a9d5ca-a10f-4789-a7ad-a7e2b1a92be8" 00:07:31.879 ], 00:07:31.879 "product_name": "Raid Volume", 00:07:31.879 "block_size": 512, 00:07:31.879 "num_blocks": 126976, 00:07:31.879 "uuid": "33a9d5ca-a10f-4789-a7ad-a7e2b1a92be8", 00:07:31.879 "assigned_rate_limits": { 00:07:31.879 "rw_ios_per_sec": 0, 00:07:31.879 "rw_mbytes_per_sec": 0, 00:07:31.879 "r_mbytes_per_sec": 0, 00:07:31.879 "w_mbytes_per_sec": 0 00:07:31.879 }, 00:07:31.879 "claimed": false, 00:07:31.879 "zoned": false, 00:07:31.879 "supported_io_types": { 00:07:31.879 "read": true, 00:07:31.879 "write": true, 00:07:31.879 "unmap": true, 00:07:31.879 "flush": true, 00:07:31.879 "reset": true, 00:07:31.879 "nvme_admin": false, 00:07:31.879 "nvme_io": false, 00:07:31.879 "nvme_io_md": false, 00:07:31.879 "write_zeroes": true, 00:07:31.879 "zcopy": false, 00:07:31.879 "get_zone_info": false, 00:07:31.879 "zone_management": false, 00:07:31.879 "zone_append": false, 00:07:31.879 "compare": false, 00:07:31.879 "compare_and_write": false, 00:07:31.879 "abort": false, 00:07:31.879 "seek_hole": false, 00:07:31.879 "seek_data": false, 00:07:31.879 "copy": false, 00:07:31.879 "nvme_iov_md": false 00:07:31.879 }, 00:07:31.879 "memory_domains": [ 00:07:31.879 { 00:07:31.879 "dma_device_id": "system", 00:07:31.879 "dma_device_type": 1 00:07:31.879 }, 00:07:31.879 { 00:07:31.879 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:07:31.879 "dma_device_type": 2 00:07:31.879 }, 00:07:31.879 { 00:07:31.879 "dma_device_id": "system", 00:07:31.879 "dma_device_type": 1 00:07:31.879 }, 00:07:31.879 { 00:07:31.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:31.879 "dma_device_type": 2 00:07:31.879 } 00:07:31.879 ], 00:07:31.879 "driver_specific": { 00:07:31.879 "raid": { 00:07:31.879 "uuid": "33a9d5ca-a10f-4789-a7ad-a7e2b1a92be8", 00:07:31.879 "strip_size_kb": 64, 00:07:31.879 "state": "online", 00:07:31.879 "raid_level": "raid0", 00:07:31.879 "superblock": true, 00:07:31.879 "num_base_bdevs": 2, 00:07:31.879 "num_base_bdevs_discovered": 2, 00:07:31.879 "num_base_bdevs_operational": 2, 00:07:31.879 "base_bdevs_list": [ 00:07:31.879 { 00:07:31.879 "name": "BaseBdev1", 00:07:31.879 "uuid": "809ffa71-c71d-437b-8e17-3350ea4f9097", 00:07:31.879 "is_configured": true, 00:07:31.879 "data_offset": 2048, 00:07:31.879 "data_size": 63488 00:07:31.879 }, 00:07:31.879 { 00:07:31.879 "name": "BaseBdev2", 00:07:31.879 "uuid": "97d70806-a7f3-4c15-9cd0-0252b007a4d4", 00:07:31.879 "is_configured": true, 00:07:31.879 "data_offset": 2048, 00:07:31.879 "data_size": 63488 00:07:31.879 } 00:07:31.879 ] 00:07:31.879 } 00:07:31.879 } 00:07:31.879 }' 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:31.879 BaseBdev2' 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:31.879 12:33:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:31.879 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.170 [2024-12-14 12:33:31.668958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:32.170 [2024-12-14 12:33:31.668995] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.170 [2024-12-14 12:33:31.669065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.170 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.170 "name": "Existed_Raid", 00:07:32.170 "uuid": "33a9d5ca-a10f-4789-a7ad-a7e2b1a92be8", 00:07:32.170 "strip_size_kb": 64, 00:07:32.170 "state": "offline", 00:07:32.170 "raid_level": "raid0", 00:07:32.170 "superblock": true, 00:07:32.170 "num_base_bdevs": 2, 00:07:32.170 "num_base_bdevs_discovered": 1, 00:07:32.170 "num_base_bdevs_operational": 1, 00:07:32.170 "base_bdevs_list": [ 00:07:32.170 { 00:07:32.170 "name": null, 00:07:32.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:32.171 "is_configured": false, 00:07:32.171 "data_offset": 0, 00:07:32.171 "data_size": 63488 00:07:32.171 }, 00:07:32.171 { 00:07:32.171 "name": "BaseBdev2", 00:07:32.171 "uuid": "97d70806-a7f3-4c15-9cd0-0252b007a4d4", 00:07:32.171 "is_configured": true, 00:07:32.171 "data_offset": 2048, 00:07:32.171 "data_size": 63488 00:07:32.171 } 00:07:32.171 ] 00:07:32.171 }' 00:07:32.171 12:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:07:32.171 12:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.741 [2024-12-14 12:33:32.274231] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:32.741 [2024-12-14 12:33:32.274353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:32.741 12:33:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62773 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62773 ']' 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62773 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62773 00:07:32.741 killing process with pid 62773 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.741 12:33:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62773' 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62773 00:07:32.741 [2024-12-14 12:33:32.465478] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:32.741 12:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62773 00:07:33.001 [2024-12-14 12:33:32.483541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.941 12:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:33.941 00:07:33.941 real 0m5.016s 00:07:33.941 user 0m7.135s 00:07:33.941 sys 0m0.919s 00:07:33.941 12:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:33.941 12:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:33.941 ************************************ 00:07:33.941 END TEST raid_state_function_test_sb 00:07:33.941 ************************************ 00:07:33.941 12:33:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:33.941 12:33:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:33.941 12:33:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.941 12:33:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.941 ************************************ 00:07:33.941 START TEST raid_superblock_test 00:07:33.941 ************************************ 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:33.941 12:33:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63025 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63025 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63025 ']' 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.941 12:33:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.201 [2024-12-14 12:33:33.746988] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:34.201 [2024-12-14 12:33:33.747208] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63025 ] 00:07:34.201 [2024-12-14 12:33:33.920778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.461 [2024-12-14 12:33:34.029466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.720 [2024-12-14 12:33:34.224149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.720 [2024-12-14 12:33:34.224285] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- 
# local bdev_malloc=malloc1 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.979 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.980 malloc1 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.980 [2024-12-14 12:33:34.656476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:34.980 [2024-12-14 12:33:34.656611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.980 [2024-12-14 12:33:34.656653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:34.980 [2024-12-14 12:33:34.656681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.980 [2024-12-14 12:33:34.658870] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.980 [2024-12-14 12:33:34.658958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:34.980 pt1 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.980 malloc2 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:34.980 [2024-12-14 12:33:34.709109] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:34.980 [2024-12-14 12:33:34.709200] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:34.980 [2024-12-14 12:33:34.709237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:34.980 [2024-12-14 12:33:34.709264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:34.980 [2024-12-14 12:33:34.711376] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:34.980 [2024-12-14 12:33:34.711450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:34.980 pt2 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:34.980 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.238 [2024-12-14 12:33:34.721146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:35.238 [2024-12-14 12:33:34.722987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:35.238 [2024-12-14 12:33:34.723222] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:35.238 [2024-12-14 12:33:34.723269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:35.238 [2024-12-14 12:33:34.723531] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:35.238 [2024-12-14 12:33:34.723721] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:35.238 [2024-12-14 12:33:34.723764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:35.238 [2024-12-14 12:33:34.723918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.238 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.239 "name": "raid_bdev1", 00:07:35.239 "uuid": "e106e078-12c8-4231-8262-ccc1033596f8", 00:07:35.239 "strip_size_kb": 64, 00:07:35.239 "state": "online", 00:07:35.239 "raid_level": "raid0", 00:07:35.239 "superblock": true, 00:07:35.239 "num_base_bdevs": 2, 00:07:35.239 "num_base_bdevs_discovered": 2, 00:07:35.239 "num_base_bdevs_operational": 2, 00:07:35.239 "base_bdevs_list": [ 00:07:35.239 { 00:07:35.239 "name": "pt1", 00:07:35.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:35.239 "is_configured": true, 00:07:35.239 "data_offset": 2048, 00:07:35.239 "data_size": 63488 00:07:35.239 }, 00:07:35.239 { 00:07:35.239 "name": "pt2", 00:07:35.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:35.239 "is_configured": true, 00:07:35.239 "data_offset": 2048, 00:07:35.239 "data_size": 63488 00:07:35.239 } 00:07:35.239 ] 00:07:35.239 }' 00:07:35.239 12:33:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.239 12:33:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:35.498 12:33:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.498 [2024-12-14 12:33:35.196569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.498 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:35.498 "name": "raid_bdev1", 00:07:35.498 "aliases": [ 00:07:35.498 "e106e078-12c8-4231-8262-ccc1033596f8" 00:07:35.498 ], 00:07:35.498 "product_name": "Raid Volume", 00:07:35.498 "block_size": 512, 00:07:35.498 "num_blocks": 126976, 00:07:35.498 "uuid": "e106e078-12c8-4231-8262-ccc1033596f8", 00:07:35.498 "assigned_rate_limits": { 00:07:35.498 "rw_ios_per_sec": 0, 00:07:35.498 "rw_mbytes_per_sec": 0, 00:07:35.498 "r_mbytes_per_sec": 0, 00:07:35.498 "w_mbytes_per_sec": 0 00:07:35.498 }, 00:07:35.498 "claimed": false, 00:07:35.498 "zoned": false, 00:07:35.498 "supported_io_types": { 00:07:35.498 "read": true, 00:07:35.498 "write": true, 00:07:35.498 "unmap": true, 00:07:35.498 "flush": true, 00:07:35.498 "reset": true, 00:07:35.498 "nvme_admin": false, 00:07:35.498 "nvme_io": false, 00:07:35.498 "nvme_io_md": false, 00:07:35.498 "write_zeroes": true, 00:07:35.498 "zcopy": false, 00:07:35.498 "get_zone_info": false, 00:07:35.498 "zone_management": false, 00:07:35.498 "zone_append": false, 00:07:35.498 "compare": false, 00:07:35.498 "compare_and_write": false, 00:07:35.498 "abort": false, 00:07:35.498 "seek_hole": false, 00:07:35.498 
"seek_data": false, 00:07:35.498 "copy": false, 00:07:35.498 "nvme_iov_md": false 00:07:35.498 }, 00:07:35.498 "memory_domains": [ 00:07:35.498 { 00:07:35.498 "dma_device_id": "system", 00:07:35.498 "dma_device_type": 1 00:07:35.498 }, 00:07:35.498 { 00:07:35.498 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.498 "dma_device_type": 2 00:07:35.498 }, 00:07:35.498 { 00:07:35.498 "dma_device_id": "system", 00:07:35.498 "dma_device_type": 1 00:07:35.498 }, 00:07:35.499 { 00:07:35.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:35.499 "dma_device_type": 2 00:07:35.499 } 00:07:35.499 ], 00:07:35.499 "driver_specific": { 00:07:35.499 "raid": { 00:07:35.499 "uuid": "e106e078-12c8-4231-8262-ccc1033596f8", 00:07:35.499 "strip_size_kb": 64, 00:07:35.499 "state": "online", 00:07:35.499 "raid_level": "raid0", 00:07:35.499 "superblock": true, 00:07:35.499 "num_base_bdevs": 2, 00:07:35.499 "num_base_bdevs_discovered": 2, 00:07:35.499 "num_base_bdevs_operational": 2, 00:07:35.499 "base_bdevs_list": [ 00:07:35.499 { 00:07:35.499 "name": "pt1", 00:07:35.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:35.499 "is_configured": true, 00:07:35.499 "data_offset": 2048, 00:07:35.499 "data_size": 63488 00:07:35.499 }, 00:07:35.499 { 00:07:35.499 "name": "pt2", 00:07:35.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:35.499 "is_configured": true, 00:07:35.499 "data_offset": 2048, 00:07:35.499 "data_size": 63488 00:07:35.499 } 00:07:35.499 ] 00:07:35.499 } 00:07:35.499 } 00:07:35.499 }' 00:07:35.499 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:35.758 pt2' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.758 12:33:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 
00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.758 [2024-12-14 12:33:35.408332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e106e078-12c8-4231-8262-ccc1033596f8 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e106e078-12c8-4231-8262-ccc1033596f8 ']' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.758 [2024-12-14 12:33:35.455931] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.758 [2024-12-14 12:33:35.456008] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.758 [2024-12-14 12:33:35.456134] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.758 [2024-12-14 12:33:35.456224] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.758 [2024-12-14 12:33:35.456244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.758 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.019 [2024-12-14 12:33:35.591726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:36.019 [2024-12-14 12:33:35.593581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:36.019 [2024-12-14 12:33:35.593652] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:36.019 [2024-12-14 12:33:35.593708] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:36.019 [2024-12-14 12:33:35.593723] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:36.019 [2024-12-14 12:33:35.593737] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:36.019 request: 00:07:36.019 { 00:07:36.019 "name": "raid_bdev1", 00:07:36.019 "raid_level": "raid0", 00:07:36.019 "base_bdevs": [ 00:07:36.019 "malloc1", 00:07:36.019 "malloc2" 00:07:36.019 ], 00:07:36.019 "strip_size_kb": 64, 00:07:36.019 "superblock": false, 00:07:36.019 "method": "bdev_raid_create", 00:07:36.019 "req_id": 1 00:07:36.019 } 00:07:36.019 Got JSON-RPC error response 00:07:36.019 response: 00:07:36.019 { 00:07:36.019 "code": -17, 00:07:36.019 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:36.019 } 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.019 
12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.019 [2024-12-14 12:33:35.651604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:36.019 [2024-12-14 12:33:35.651747] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.019 [2024-12-14 12:33:35.651782] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:36.019 [2024-12-14 12:33:35.651820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.019 [2024-12-14 12:33:35.654219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.019 [2024-12-14 12:33:35.654294] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:36.019 [2024-12-14 12:33:35.654410] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:36.019 [2024-12-14 12:33:35.654496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:36.019 pt1 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.019 "name": "raid_bdev1", 00:07:36.019 "uuid": "e106e078-12c8-4231-8262-ccc1033596f8", 00:07:36.019 "strip_size_kb": 64, 00:07:36.019 "state": "configuring", 00:07:36.019 "raid_level": "raid0", 00:07:36.019 "superblock": true, 00:07:36.019 "num_base_bdevs": 2, 00:07:36.019 "num_base_bdevs_discovered": 1, 00:07:36.019 "num_base_bdevs_operational": 2, 00:07:36.019 "base_bdevs_list": [ 00:07:36.019 { 00:07:36.019 "name": "pt1", 00:07:36.019 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:07:36.019 "is_configured": true, 00:07:36.019 "data_offset": 2048, 00:07:36.019 "data_size": 63488 00:07:36.019 }, 00:07:36.019 { 00:07:36.019 "name": null, 00:07:36.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.019 "is_configured": false, 00:07:36.019 "data_offset": 2048, 00:07:36.019 "data_size": 63488 00:07:36.019 } 00:07:36.019 ] 00:07:36.019 }' 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.019 12:33:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.589 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:36.589 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:36.589 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:36.589 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:36.589 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.589 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.589 [2024-12-14 12:33:36.126852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:36.589 [2024-12-14 12:33:36.126929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.589 [2024-12-14 12:33:36.126952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:36.589 [2024-12-14 12:33:36.126963] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.589 [2024-12-14 12:33:36.127455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.589 [2024-12-14 12:33:36.127531] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:07:36.589 [2024-12-14 12:33:36.127625] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:36.589 [2024-12-14 12:33:36.127655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:36.589 [2024-12-14 12:33:36.127773] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:36.589 [2024-12-14 12:33:36.127784] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:36.589 [2024-12-14 12:33:36.128024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:36.589 [2024-12-14 12:33:36.128181] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:36.589 [2024-12-14 12:33:36.128190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:36.589 [2024-12-14 12:33:36.128327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:36.589 pt2 00:07:36.589 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.590 "name": "raid_bdev1", 00:07:36.590 "uuid": "e106e078-12c8-4231-8262-ccc1033596f8", 00:07:36.590 "strip_size_kb": 64, 00:07:36.590 "state": "online", 00:07:36.590 "raid_level": "raid0", 00:07:36.590 "superblock": true, 00:07:36.590 "num_base_bdevs": 2, 00:07:36.590 "num_base_bdevs_discovered": 2, 00:07:36.590 "num_base_bdevs_operational": 2, 00:07:36.590 "base_bdevs_list": [ 00:07:36.590 { 00:07:36.590 "name": "pt1", 00:07:36.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:36.590 "is_configured": true, 00:07:36.590 "data_offset": 2048, 00:07:36.590 "data_size": 63488 00:07:36.590 }, 00:07:36.590 { 00:07:36.590 "name": "pt2", 00:07:36.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:36.590 "is_configured": true, 00:07:36.590 "data_offset": 2048, 00:07:36.590 "data_size": 63488 00:07:36.590 } 00:07:36.590 ] 00:07:36.590 }' 00:07:36.590 12:33:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.590 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.849 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:36.849 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:36.849 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:36.849 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:36.849 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:36.849 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:36.849 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:36.849 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.850 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.850 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:36.850 [2024-12-14 12:33:36.562421] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:36.850 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.109 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.109 "name": "raid_bdev1", 00:07:37.109 "aliases": [ 00:07:37.109 "e106e078-12c8-4231-8262-ccc1033596f8" 00:07:37.109 ], 00:07:37.109 "product_name": "Raid Volume", 00:07:37.110 "block_size": 512, 00:07:37.110 "num_blocks": 126976, 00:07:37.110 "uuid": "e106e078-12c8-4231-8262-ccc1033596f8", 00:07:37.110 "assigned_rate_limits": { 00:07:37.110 "rw_ios_per_sec": 0, 00:07:37.110 "rw_mbytes_per_sec": 0, 00:07:37.110 
"r_mbytes_per_sec": 0, 00:07:37.110 "w_mbytes_per_sec": 0 00:07:37.110 }, 00:07:37.110 "claimed": false, 00:07:37.110 "zoned": false, 00:07:37.110 "supported_io_types": { 00:07:37.110 "read": true, 00:07:37.110 "write": true, 00:07:37.110 "unmap": true, 00:07:37.110 "flush": true, 00:07:37.110 "reset": true, 00:07:37.110 "nvme_admin": false, 00:07:37.110 "nvme_io": false, 00:07:37.110 "nvme_io_md": false, 00:07:37.110 "write_zeroes": true, 00:07:37.110 "zcopy": false, 00:07:37.110 "get_zone_info": false, 00:07:37.110 "zone_management": false, 00:07:37.110 "zone_append": false, 00:07:37.110 "compare": false, 00:07:37.110 "compare_and_write": false, 00:07:37.110 "abort": false, 00:07:37.110 "seek_hole": false, 00:07:37.110 "seek_data": false, 00:07:37.110 "copy": false, 00:07:37.110 "nvme_iov_md": false 00:07:37.110 }, 00:07:37.110 "memory_domains": [ 00:07:37.110 { 00:07:37.110 "dma_device_id": "system", 00:07:37.110 "dma_device_type": 1 00:07:37.110 }, 00:07:37.110 { 00:07:37.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.110 "dma_device_type": 2 00:07:37.110 }, 00:07:37.110 { 00:07:37.110 "dma_device_id": "system", 00:07:37.110 "dma_device_type": 1 00:07:37.110 }, 00:07:37.110 { 00:07:37.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.110 "dma_device_type": 2 00:07:37.110 } 00:07:37.110 ], 00:07:37.110 "driver_specific": { 00:07:37.110 "raid": { 00:07:37.110 "uuid": "e106e078-12c8-4231-8262-ccc1033596f8", 00:07:37.110 "strip_size_kb": 64, 00:07:37.110 "state": "online", 00:07:37.110 "raid_level": "raid0", 00:07:37.110 "superblock": true, 00:07:37.110 "num_base_bdevs": 2, 00:07:37.110 "num_base_bdevs_discovered": 2, 00:07:37.110 "num_base_bdevs_operational": 2, 00:07:37.110 "base_bdevs_list": [ 00:07:37.110 { 00:07:37.110 "name": "pt1", 00:07:37.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.110 "is_configured": true, 00:07:37.110 "data_offset": 2048, 00:07:37.110 "data_size": 63488 00:07:37.110 }, 00:07:37.110 { 00:07:37.110 "name": 
"pt2", 00:07:37.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.110 "is_configured": true, 00:07:37.110 "data_offset": 2048, 00:07:37.110 "data_size": 63488 00:07:37.110 } 00:07:37.110 ] 00:07:37.110 } 00:07:37.110 } 00:07:37.110 }' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:37.110 pt2' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:37.110 [2024-12-14 12:33:36.813854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.110 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e106e078-12c8-4231-8262-ccc1033596f8 '!=' e106e078-12c8-4231-8262-ccc1033596f8 ']' 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63025 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63025 ']' 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 63025 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63025 00:07:37.370 killing process with pid 63025 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63025' 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63025 00:07:37.370 [2024-12-14 12:33:36.900361] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:37.370 [2024-12-14 12:33:36.900443] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:37.370 [2024-12-14 12:33:36.900493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:37.370 [2024-12-14 12:33:36.900504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:37.370 12:33:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63025 00:07:37.370 [2024-12-14 12:33:37.103458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:38.749 12:33:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:38.749 00:07:38.749 real 0m4.555s 00:07:38.749 user 0m6.438s 00:07:38.749 sys 0m0.755s 00:07:38.749 ************************************ 00:07:38.749 END TEST raid_superblock_test 00:07:38.749 ************************************ 00:07:38.749 12:33:38 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.749 12:33:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.749 12:33:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:38.749 12:33:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:38.749 12:33:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.749 12:33:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:38.749 ************************************ 00:07:38.749 START TEST raid_read_error_test 00:07:38.749 ************************************ 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:38.749 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tdSBOf5NJM 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63237 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63237 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63237 ']' 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.750 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock... 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.750 12:33:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.750 [2024-12-14 12:33:38.375060] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:38.750 [2024-12-14 12:33:38.375188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63237 ] 00:07:39.009 [2024-12-14 12:33:38.548953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.009 [2024-12-14 12:33:38.662514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.268 [2024-12-14 12:33:38.864725] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.268 [2024-12-14 12:33:38.864787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.528 BaseBdev1_malloc 
00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.528 true 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.528 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.528 [2024-12-14 12:33:39.256928] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:39.528 [2024-12-14 12:33:39.256984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.529 [2024-12-14 12:33:39.257003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:39.529 [2024-12-14 12:33:39.257013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.529 [2024-12-14 12:33:39.259080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.529 [2024-12-14 12:33:39.259120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:39.529 BaseBdev1 00:07:39.529 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.529 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:39.529 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:07:39.529 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.529 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.788 BaseBdev2_malloc 00:07:39.788 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.788 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:39.788 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.788 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.788 true 00:07:39.788 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.789 [2024-12-14 12:33:39.323074] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:39.789 [2024-12-14 12:33:39.323132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:39.789 [2024-12-14 12:33:39.323151] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:39.789 [2024-12-14 12:33:39.323161] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:39.789 [2024-12-14 12:33:39.325228] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:39.789 [2024-12-14 12:33:39.325345] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:39.789 BaseBdev2 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.789 [2024-12-14 12:33:39.335101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:39.789 [2024-12-14 12:33:39.336839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:39.789 [2024-12-14 12:33:39.337036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:39.789 [2024-12-14 12:33:39.337080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:39.789 [2024-12-14 12:33:39.337326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:39.789 [2024-12-14 12:33:39.337508] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:39.789 [2024-12-14 12:33:39.337526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:39.789 [2024-12-14 12:33:39.337685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:39.789 "name": "raid_bdev1", 00:07:39.789 "uuid": "50fa349a-4913-426a-b675-451e3364c1aa", 00:07:39.789 "strip_size_kb": 64, 00:07:39.789 "state": "online", 00:07:39.789 "raid_level": "raid0", 00:07:39.789 "superblock": true, 00:07:39.789 "num_base_bdevs": 2, 00:07:39.789 "num_base_bdevs_discovered": 2, 00:07:39.789 "num_base_bdevs_operational": 2, 00:07:39.789 "base_bdevs_list": [ 00:07:39.789 { 00:07:39.789 "name": "BaseBdev1", 00:07:39.789 "uuid": "6a665d62-e6a2-5c24-8cb2-68f857fdbc11", 00:07:39.789 "is_configured": true, 00:07:39.789 "data_offset": 2048, 00:07:39.789 "data_size": 63488 00:07:39.789 }, 00:07:39.789 { 00:07:39.789 "name": "BaseBdev2", 00:07:39.789 "uuid": 
"8b805589-9335-5938-8e83-ce5e2ef63ffc", 00:07:39.789 "is_configured": true, 00:07:39.789 "data_offset": 2048, 00:07:39.789 "data_size": 63488 00:07:39.789 } 00:07:39.789 ] 00:07:39.789 }' 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:39.789 12:33:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.049 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:40.049 12:33:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:40.308 [2024-12-14 12:33:39.867524] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.247 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.247 "name": "raid_bdev1", 00:07:41.247 "uuid": "50fa349a-4913-426a-b675-451e3364c1aa", 00:07:41.248 "strip_size_kb": 64, 00:07:41.248 "state": "online", 00:07:41.248 "raid_level": "raid0", 00:07:41.248 "superblock": true, 00:07:41.248 "num_base_bdevs": 2, 00:07:41.248 "num_base_bdevs_discovered": 2, 00:07:41.248 "num_base_bdevs_operational": 2, 00:07:41.248 "base_bdevs_list": [ 00:07:41.248 { 00:07:41.248 "name": "BaseBdev1", 00:07:41.248 "uuid": "6a665d62-e6a2-5c24-8cb2-68f857fdbc11", 00:07:41.248 "is_configured": true, 00:07:41.248 "data_offset": 2048, 00:07:41.248 "data_size": 63488 00:07:41.248 }, 00:07:41.248 { 00:07:41.248 "name": "BaseBdev2", 00:07:41.248 "uuid": 
"8b805589-9335-5938-8e83-ce5e2ef63ffc", 00:07:41.248 "is_configured": true, 00:07:41.248 "data_offset": 2048, 00:07:41.248 "data_size": 63488 00:07:41.248 } 00:07:41.248 ] 00:07:41.248 }' 00:07:41.248 12:33:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.248 12:33:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.510 [2024-12-14 12:33:41.207247] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:41.510 [2024-12-14 12:33:41.207369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.510 [2024-12-14 12:33:41.210215] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.510 [2024-12-14 12:33:41.210301] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.510 [2024-12-14 12:33:41.210355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:41.510 [2024-12-14 12:33:41.210399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:41.510 { 00:07:41.510 "results": [ 00:07:41.510 { 00:07:41.510 "job": "raid_bdev1", 00:07:41.510 "core_mask": "0x1", 00:07:41.510 "workload": "randrw", 00:07:41.510 "percentage": 50, 00:07:41.510 "status": "finished", 00:07:41.510 "queue_depth": 1, 00:07:41.510 "io_size": 131072, 00:07:41.510 "runtime": 1.340785, 00:07:41.510 "iops": 16101.015449904347, 00:07:41.510 "mibps": 2012.6269312380434, 00:07:41.510 "io_failed": 1, 00:07:41.510 "io_timeout": 0, 00:07:41.510 "avg_latency_us": 
85.97288098156083, 00:07:41.510 "min_latency_us": 25.041048034934498, 00:07:41.510 "max_latency_us": 1366.5257641921398 00:07:41.510 } 00:07:41.510 ], 00:07:41.510 "core_count": 1 00:07:41.510 } 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63237 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63237 ']' 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63237 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.510 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63237 00:07:41.770 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.770 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.770 killing process with pid 63237 00:07:41.770 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63237' 00:07:41.770 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63237 00:07:41.770 [2024-12-14 12:33:41.257723] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:41.770 12:33:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63237 00:07:41.770 [2024-12-14 12:33:41.395829] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tdSBOf5NJM 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:43.151 
12:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:43.151 00:07:43.151 real 0m4.283s 00:07:43.151 user 0m5.101s 00:07:43.151 sys 0m0.544s 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.151 ************************************ 00:07:43.151 END TEST raid_read_error_test 00:07:43.151 ************************************ 00:07:43.151 12:33:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.151 12:33:42 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:43.151 12:33:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:43.151 12:33:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.151 12:33:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:43.151 ************************************ 00:07:43.151 START TEST raid_write_error_test 00:07:43.151 ************************************ 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:43.151 12:33:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Xs2cKZIsTM 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63377 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63377 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:43.151 12:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63377 ']' 00:07:43.152 12:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.152 12:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.152 12:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.152 12:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.152 12:33:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.152 [2024-12-14 12:33:42.726210] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:43.152 [2024-12-14 12:33:42.726326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63377 ] 00:07:43.152 [2024-12-14 12:33:42.883178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.411 [2024-12-14 12:33:42.995756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.671 [2024-12-14 12:33:43.196949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.671 [2024-12-14 12:33:43.197007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.931 BaseBdev1_malloc 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.931 true 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.931 [2024-12-14 12:33:43.605227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:43.931 [2024-12-14 12:33:43.605282] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:43.931 [2024-12-14 12:33:43.605302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:43.931 [2024-12-14 12:33:43.605312] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:43.931 [2024-12-14 12:33:43.607315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:43.931 [2024-12-14 12:33:43.607424] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:43.931 BaseBdev1 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:43.931 BaseBdev2_malloc 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:43.931 12:33:43 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.931 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.191 true 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.191 [2024-12-14 12:33:43.672549] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:44.191 [2024-12-14 12:33:43.672606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.191 [2024-12-14 12:33:43.672623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:44.191 [2024-12-14 12:33:43.672634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.191 [2024-12-14 12:33:43.674877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.191 [2024-12-14 12:33:43.674978] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:44.191 BaseBdev2 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.191 [2024-12-14 12:33:43.684587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:44.191 [2024-12-14 12:33:43.686562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:44.191 [2024-12-14 12:33:43.686781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:44.191 [2024-12-14 12:33:43.686799] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:44.191 [2024-12-14 12:33:43.687029] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:44.191 [2024-12-14 12:33:43.687256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:44.191 [2024-12-14 12:33:43.687276] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:44.191 [2024-12-14 12:33:43.687451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:44.191 "name": "raid_bdev1", 00:07:44.191 "uuid": "6f796ed1-f39e-4a2b-af14-9e068e1f0285", 00:07:44.191 "strip_size_kb": 64, 00:07:44.191 "state": "online", 00:07:44.191 "raid_level": "raid0", 00:07:44.191 "superblock": true, 00:07:44.191 "num_base_bdevs": 2, 00:07:44.191 "num_base_bdevs_discovered": 2, 00:07:44.191 "num_base_bdevs_operational": 2, 00:07:44.191 "base_bdevs_list": [ 00:07:44.191 { 00:07:44.191 "name": "BaseBdev1", 00:07:44.191 "uuid": "0327839d-fded-5977-bb7d-9ad484427ec1", 00:07:44.191 "is_configured": true, 00:07:44.191 "data_offset": 2048, 00:07:44.191 "data_size": 63488 00:07:44.191 }, 00:07:44.191 { 00:07:44.191 "name": "BaseBdev2", 00:07:44.191 "uuid": "9fba0bcc-800b-58cc-bb28-cabb77dfec32", 00:07:44.191 "is_configured": true, 00:07:44.191 "data_offset": 2048, 00:07:44.191 "data_size": 63488 00:07:44.191 } 00:07:44.191 ] 00:07:44.191 }' 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:44.191 12:33:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.451 12:33:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:44.451 12:33:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:44.711 [2024-12-14 12:33:44.232902] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:45.650 12:33:45 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.650 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:45.650 "name": "raid_bdev1", 00:07:45.650 "uuid": "6f796ed1-f39e-4a2b-af14-9e068e1f0285", 00:07:45.650 "strip_size_kb": 64, 00:07:45.650 "state": "online", 00:07:45.651 "raid_level": "raid0", 00:07:45.651 "superblock": true, 00:07:45.651 "num_base_bdevs": 2, 00:07:45.651 "num_base_bdevs_discovered": 2, 00:07:45.651 "num_base_bdevs_operational": 2, 00:07:45.651 "base_bdevs_list": [ 00:07:45.651 { 00:07:45.651 "name": "BaseBdev1", 00:07:45.651 "uuid": "0327839d-fded-5977-bb7d-9ad484427ec1", 00:07:45.651 "is_configured": true, 00:07:45.651 "data_offset": 2048, 00:07:45.651 "data_size": 63488 00:07:45.651 }, 00:07:45.651 { 00:07:45.651 "name": "BaseBdev2", 00:07:45.651 "uuid": "9fba0bcc-800b-58cc-bb28-cabb77dfec32", 00:07:45.651 "is_configured": true, 00:07:45.651 "data_offset": 2048, 00:07:45.651 "data_size": 63488 00:07:45.651 } 00:07:45.651 ] 00:07:45.651 }' 00:07:45.651 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:45.651 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.910 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:45.910 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.910 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.910 [2024-12-14 12:33:45.617165] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:45.910 [2024-12-14 12:33:45.617266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:45.910 [2024-12-14 12:33:45.619984] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.910 [2024-12-14 12:33:45.620086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.910 [2024-12-14 12:33:45.620139] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.910 [2024-12-14 12:33:45.620202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:45.910 { 00:07:45.910 "results": [ 00:07:45.910 { 00:07:45.910 "job": "raid_bdev1", 00:07:45.910 "core_mask": "0x1", 00:07:45.910 "workload": "randrw", 00:07:45.910 "percentage": 50, 00:07:45.910 "status": "finished", 00:07:45.910 "queue_depth": 1, 00:07:45.910 "io_size": 131072, 00:07:45.910 "runtime": 1.385356, 00:07:45.910 "iops": 16114.991381276726, 00:07:45.910 "mibps": 2014.3739226595908, 00:07:45.910 "io_failed": 1, 00:07:45.910 "io_timeout": 0, 00:07:45.910 "avg_latency_us": 85.80076649035902, 00:07:45.911 "min_latency_us": 25.7117903930131, 00:07:45.911 "max_latency_us": 1452.380786026201 00:07:45.911 } 00:07:45.911 ], 00:07:45.911 "core_count": 1 00:07:45.911 } 00:07:45.911 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.911 12:33:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63377 00:07:45.911 12:33:45 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 63377 ']' 00:07:45.911 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63377 00:07:45.911 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:45.911 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.911 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63377 00:07:46.170 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.170 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.170 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63377' 00:07:46.170 killing process with pid 63377 00:07:46.170 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63377 00:07:46.170 [2024-12-14 12:33:45.666373] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:46.170 12:33:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63377 00:07:46.170 [2024-12-14 12:33:45.800024] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Xs2cKZIsTM 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:07:47.552 00:07:47.552 real 0m4.349s 00:07:47.552 user 0m5.239s 00:07:47.552 sys 0m0.518s 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.552 12:33:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.552 ************************************ 00:07:47.552 END TEST raid_write_error_test 00:07:47.552 ************************************ 00:07:47.552 12:33:47 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:47.552 12:33:47 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:47.552 12:33:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:47.552 12:33:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.552 12:33:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:47.552 ************************************ 00:07:47.552 START TEST raid_state_function_test 00:07:47.552 ************************************ 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63515 00:07:47.552 12:33:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63515' 00:07:47.552 Process raid pid: 63515 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63515 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63515 ']' 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:47.552 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.552 [2024-12-14 12:33:47.136736] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:47.552 [2024-12-14 12:33:47.136942] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.812 [2024-12-14 12:33:47.309299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.812 [2024-12-14 12:33:47.424802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.072 [2024-12-14 12:33:47.617523] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.072 [2024-12-14 12:33:47.617561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.331 [2024-12-14 12:33:47.972884] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.331 [2024-12-14 12:33:47.972939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.331 [2024-12-14 12:33:47.972950] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.331 [2024-12-14 12:33:47.972959] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.331 12:33:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.331 12:33:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.331 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.331 "name": "Existed_Raid", 00:07:48.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.331 "strip_size_kb": 64, 00:07:48.331 "state": "configuring", 00:07:48.331 
"raid_level": "concat", 00:07:48.331 "superblock": false, 00:07:48.331 "num_base_bdevs": 2, 00:07:48.331 "num_base_bdevs_discovered": 0, 00:07:48.331 "num_base_bdevs_operational": 2, 00:07:48.331 "base_bdevs_list": [ 00:07:48.331 { 00:07:48.331 "name": "BaseBdev1", 00:07:48.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.331 "is_configured": false, 00:07:48.331 "data_offset": 0, 00:07:48.331 "data_size": 0 00:07:48.331 }, 00:07:48.331 { 00:07:48.331 "name": "BaseBdev2", 00:07:48.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.331 "is_configured": false, 00:07:48.331 "data_offset": 0, 00:07:48.331 "data_size": 0 00:07:48.331 } 00:07:48.331 ] 00:07:48.331 }' 00:07:48.331 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.331 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 [2024-12-14 12:33:48.452066] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:48.916 [2024-12-14 12:33:48.452157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:48.916 [2024-12-14 12:33:48.464007] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:48.916 [2024-12-14 12:33:48.464124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:48.916 [2024-12-14 12:33:48.464160] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:48.916 [2024-12-14 12:33:48.464190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 [2024-12-14 12:33:48.511249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:48.916 BaseBdev1 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.916 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.916 [ 00:07:48.916 { 00:07:48.916 "name": "BaseBdev1", 00:07:48.916 "aliases": [ 00:07:48.916 "eaaca4bc-2c5b-4ece-85a0-de2c5b87dd13" 00:07:48.916 ], 00:07:48.916 "product_name": "Malloc disk", 00:07:48.916 "block_size": 512, 00:07:48.916 "num_blocks": 65536, 00:07:48.916 "uuid": "eaaca4bc-2c5b-4ece-85a0-de2c5b87dd13", 00:07:48.916 "assigned_rate_limits": { 00:07:48.916 "rw_ios_per_sec": 0, 00:07:48.916 "rw_mbytes_per_sec": 0, 00:07:48.916 "r_mbytes_per_sec": 0, 00:07:48.916 "w_mbytes_per_sec": 0 00:07:48.916 }, 00:07:48.916 "claimed": true, 00:07:48.916 "claim_type": "exclusive_write", 00:07:48.916 "zoned": false, 00:07:48.917 "supported_io_types": { 00:07:48.917 "read": true, 00:07:48.917 "write": true, 00:07:48.917 "unmap": true, 00:07:48.917 "flush": true, 00:07:48.917 "reset": true, 00:07:48.917 "nvme_admin": false, 00:07:48.917 "nvme_io": false, 00:07:48.917 "nvme_io_md": false, 00:07:48.917 "write_zeroes": true, 00:07:48.917 "zcopy": true, 00:07:48.917 "get_zone_info": false, 00:07:48.917 "zone_management": false, 00:07:48.917 "zone_append": false, 00:07:48.917 "compare": false, 00:07:48.917 "compare_and_write": false, 00:07:48.917 "abort": true, 00:07:48.917 "seek_hole": false, 00:07:48.917 "seek_data": false, 00:07:48.917 "copy": true, 00:07:48.917 "nvme_iov_md": 
false 00:07:48.917 }, 00:07:48.917 "memory_domains": [ 00:07:48.917 { 00:07:48.917 "dma_device_id": "system", 00:07:48.917 "dma_device_type": 1 00:07:48.917 }, 00:07:48.917 { 00:07:48.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.917 "dma_device_type": 2 00:07:48.917 } 00:07:48.917 ], 00:07:48.917 "driver_specific": {} 00:07:48.917 } 00:07:48.917 ] 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:48.917 
12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:48.917 "name": "Existed_Raid", 00:07:48.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.917 "strip_size_kb": 64, 00:07:48.917 "state": "configuring", 00:07:48.917 "raid_level": "concat", 00:07:48.917 "superblock": false, 00:07:48.917 "num_base_bdevs": 2, 00:07:48.917 "num_base_bdevs_discovered": 1, 00:07:48.917 "num_base_bdevs_operational": 2, 00:07:48.917 "base_bdevs_list": [ 00:07:48.917 { 00:07:48.917 "name": "BaseBdev1", 00:07:48.917 "uuid": "eaaca4bc-2c5b-4ece-85a0-de2c5b87dd13", 00:07:48.917 "is_configured": true, 00:07:48.917 "data_offset": 0, 00:07:48.917 "data_size": 65536 00:07:48.917 }, 00:07:48.917 { 00:07:48.917 "name": "BaseBdev2", 00:07:48.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.917 "is_configured": false, 00:07:48.917 "data_offset": 0, 00:07:48.917 "data_size": 0 00:07:48.917 } 00:07:48.917 ] 00:07:48.917 }' 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.917 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.493 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:49.493 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.493 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.493 [2024-12-14 12:33:48.994481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:49.493 [2024-12-14 12:33:48.994606] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:49.493 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.493 12:33:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:49.493 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.493 12:33:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.493 [2024-12-14 12:33:49.006509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:49.493 [2024-12-14 12:33:49.008438] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:49.493 [2024-12-14 12:33:49.008482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.493 "name": "Existed_Raid", 00:07:49.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.493 "strip_size_kb": 64, 00:07:49.493 "state": "configuring", 00:07:49.493 "raid_level": "concat", 00:07:49.493 "superblock": false, 00:07:49.493 "num_base_bdevs": 2, 00:07:49.493 "num_base_bdevs_discovered": 1, 00:07:49.493 "num_base_bdevs_operational": 2, 00:07:49.493 "base_bdevs_list": [ 00:07:49.493 { 00:07:49.493 "name": "BaseBdev1", 00:07:49.493 "uuid": "eaaca4bc-2c5b-4ece-85a0-de2c5b87dd13", 00:07:49.493 "is_configured": true, 00:07:49.493 "data_offset": 0, 00:07:49.493 "data_size": 65536 00:07:49.493 }, 00:07:49.493 { 00:07:49.493 "name": "BaseBdev2", 00:07:49.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.493 "is_configured": false, 00:07:49.493 "data_offset": 0, 00:07:49.493 "data_size": 0 00:07:49.493 } 
00:07:49.493 ] 00:07:49.493 }' 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.493 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.753 [2024-12-14 12:33:49.457653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:49.753 [2024-12-14 12:33:49.457781] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:49.753 [2024-12-14 12:33:49.457812] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:49.753 [2024-12-14 12:33:49.458192] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:49.753 [2024-12-14 12:33:49.458461] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:49.753 [2024-12-14 12:33:49.458516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:49.753 [2024-12-14 12:33:49.458830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:49.753 BaseBdev2 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:49.753 12:33:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.753 [ 00:07:49.753 { 00:07:49.753 "name": "BaseBdev2", 00:07:49.753 "aliases": [ 00:07:49.753 "441566ec-2ee2-4088-9fb2-f4e2bed5d53a" 00:07:49.753 ], 00:07:49.753 "product_name": "Malloc disk", 00:07:49.753 "block_size": 512, 00:07:49.753 "num_blocks": 65536, 00:07:49.753 "uuid": "441566ec-2ee2-4088-9fb2-f4e2bed5d53a", 00:07:49.753 "assigned_rate_limits": { 00:07:49.753 "rw_ios_per_sec": 0, 00:07:49.753 "rw_mbytes_per_sec": 0, 00:07:49.753 "r_mbytes_per_sec": 0, 00:07:49.753 "w_mbytes_per_sec": 0 00:07:49.753 }, 00:07:49.753 "claimed": true, 00:07:49.753 "claim_type": "exclusive_write", 00:07:49.753 "zoned": false, 00:07:49.753 "supported_io_types": { 00:07:49.753 "read": true, 00:07:49.753 "write": true, 00:07:49.753 "unmap": true, 00:07:49.753 "flush": true, 00:07:49.753 "reset": true, 00:07:49.753 "nvme_admin": false, 00:07:49.753 "nvme_io": false, 00:07:49.753 "nvme_io_md": 
false, 00:07:49.753 "write_zeroes": true, 00:07:49.753 "zcopy": true, 00:07:49.753 "get_zone_info": false, 00:07:49.753 "zone_management": false, 00:07:49.753 "zone_append": false, 00:07:49.753 "compare": false, 00:07:49.753 "compare_and_write": false, 00:07:49.753 "abort": true, 00:07:49.753 "seek_hole": false, 00:07:49.753 "seek_data": false, 00:07:49.753 "copy": true, 00:07:49.753 "nvme_iov_md": false 00:07:49.753 }, 00:07:49.753 "memory_domains": [ 00:07:49.753 { 00:07:49.753 "dma_device_id": "system", 00:07:49.753 "dma_device_type": 1 00:07:49.753 }, 00:07:49.753 { 00:07:49.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.753 "dma_device_type": 2 00:07:49.753 } 00:07:49.753 ], 00:07:49.753 "driver_specific": {} 00:07:49.753 } 00:07:49.753 ] 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.753 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.013 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.013 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.013 "name": "Existed_Raid", 00:07:50.013 "uuid": "f577c8c8-36b9-41e9-bf5f-f9ff73bf2dff", 00:07:50.013 "strip_size_kb": 64, 00:07:50.013 "state": "online", 00:07:50.013 "raid_level": "concat", 00:07:50.013 "superblock": false, 00:07:50.013 "num_base_bdevs": 2, 00:07:50.013 "num_base_bdevs_discovered": 2, 00:07:50.013 "num_base_bdevs_operational": 2, 00:07:50.013 "base_bdevs_list": [ 00:07:50.013 { 00:07:50.013 "name": "BaseBdev1", 00:07:50.013 "uuid": "eaaca4bc-2c5b-4ece-85a0-de2c5b87dd13", 00:07:50.013 "is_configured": true, 00:07:50.013 "data_offset": 0, 00:07:50.013 "data_size": 65536 00:07:50.013 }, 00:07:50.013 { 00:07:50.013 "name": "BaseBdev2", 00:07:50.013 "uuid": "441566ec-2ee2-4088-9fb2-f4e2bed5d53a", 00:07:50.013 "is_configured": true, 00:07:50.013 "data_offset": 0, 00:07:50.013 "data_size": 65536 00:07:50.013 } 00:07:50.013 ] 00:07:50.013 }' 00:07:50.013 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:50.013 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.273 [2024-12-14 12:33:49.889240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:50.273 "name": "Existed_Raid", 00:07:50.273 "aliases": [ 00:07:50.273 "f577c8c8-36b9-41e9-bf5f-f9ff73bf2dff" 00:07:50.273 ], 00:07:50.273 "product_name": "Raid Volume", 00:07:50.273 "block_size": 512, 00:07:50.273 "num_blocks": 131072, 00:07:50.273 "uuid": "f577c8c8-36b9-41e9-bf5f-f9ff73bf2dff", 00:07:50.273 "assigned_rate_limits": { 00:07:50.273 "rw_ios_per_sec": 0, 00:07:50.273 "rw_mbytes_per_sec": 0, 00:07:50.273 "r_mbytes_per_sec": 
0, 00:07:50.273 "w_mbytes_per_sec": 0 00:07:50.273 }, 00:07:50.273 "claimed": false, 00:07:50.273 "zoned": false, 00:07:50.273 "supported_io_types": { 00:07:50.273 "read": true, 00:07:50.273 "write": true, 00:07:50.273 "unmap": true, 00:07:50.273 "flush": true, 00:07:50.273 "reset": true, 00:07:50.273 "nvme_admin": false, 00:07:50.273 "nvme_io": false, 00:07:50.273 "nvme_io_md": false, 00:07:50.273 "write_zeroes": true, 00:07:50.273 "zcopy": false, 00:07:50.273 "get_zone_info": false, 00:07:50.273 "zone_management": false, 00:07:50.273 "zone_append": false, 00:07:50.273 "compare": false, 00:07:50.273 "compare_and_write": false, 00:07:50.273 "abort": false, 00:07:50.273 "seek_hole": false, 00:07:50.273 "seek_data": false, 00:07:50.273 "copy": false, 00:07:50.273 "nvme_iov_md": false 00:07:50.273 }, 00:07:50.273 "memory_domains": [ 00:07:50.273 { 00:07:50.273 "dma_device_id": "system", 00:07:50.273 "dma_device_type": 1 00:07:50.273 }, 00:07:50.273 { 00:07:50.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.273 "dma_device_type": 2 00:07:50.273 }, 00:07:50.273 { 00:07:50.273 "dma_device_id": "system", 00:07:50.273 "dma_device_type": 1 00:07:50.273 }, 00:07:50.273 { 00:07:50.273 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.273 "dma_device_type": 2 00:07:50.273 } 00:07:50.273 ], 00:07:50.273 "driver_specific": { 00:07:50.273 "raid": { 00:07:50.273 "uuid": "f577c8c8-36b9-41e9-bf5f-f9ff73bf2dff", 00:07:50.273 "strip_size_kb": 64, 00:07:50.273 "state": "online", 00:07:50.273 "raid_level": "concat", 00:07:50.273 "superblock": false, 00:07:50.273 "num_base_bdevs": 2, 00:07:50.273 "num_base_bdevs_discovered": 2, 00:07:50.273 "num_base_bdevs_operational": 2, 00:07:50.273 "base_bdevs_list": [ 00:07:50.273 { 00:07:50.273 "name": "BaseBdev1", 00:07:50.273 "uuid": "eaaca4bc-2c5b-4ece-85a0-de2c5b87dd13", 00:07:50.273 "is_configured": true, 00:07:50.273 "data_offset": 0, 00:07:50.273 "data_size": 65536 00:07:50.273 }, 00:07:50.273 { 00:07:50.273 "name": "BaseBdev2", 
00:07:50.273 "uuid": "441566ec-2ee2-4088-9fb2-f4e2bed5d53a", 00:07:50.273 "is_configured": true, 00:07:50.273 "data_offset": 0, 00:07:50.273 "data_size": 65536 00:07:50.273 } 00:07:50.273 ] 00:07:50.273 } 00:07:50.273 } 00:07:50.273 }' 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:50.273 BaseBdev2' 00:07:50.273 12:33:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.533 [2024-12-14 12:33:50.104707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.533 [2024-12-14 12:33:50.104746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.533 [2024-12-14 12:33:50.104797] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.533 "name": "Existed_Raid", 00:07:50.533 "uuid": "f577c8c8-36b9-41e9-bf5f-f9ff73bf2dff", 00:07:50.533 "strip_size_kb": 64, 00:07:50.533 
"state": "offline", 00:07:50.533 "raid_level": "concat", 00:07:50.533 "superblock": false, 00:07:50.533 "num_base_bdevs": 2, 00:07:50.533 "num_base_bdevs_discovered": 1, 00:07:50.533 "num_base_bdevs_operational": 1, 00:07:50.533 "base_bdevs_list": [ 00:07:50.533 { 00:07:50.533 "name": null, 00:07:50.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.533 "is_configured": false, 00:07:50.533 "data_offset": 0, 00:07:50.533 "data_size": 65536 00:07:50.533 }, 00:07:50.533 { 00:07:50.533 "name": "BaseBdev2", 00:07:50.533 "uuid": "441566ec-2ee2-4088-9fb2-f4e2bed5d53a", 00:07:50.533 "is_configured": true, 00:07:50.533 "data_offset": 0, 00:07:50.533 "data_size": 65536 00:07:50.533 } 00:07:50.533 ] 00:07:50.533 }' 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.533 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.105 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:51.105 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.105 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.106 [2024-12-14 12:33:50.685075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:51.106 [2024-12-14 12:33:50.685219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63515 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63515 ']' 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 63515 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:51.106 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63515 00:07:51.364 killing process with pid 63515 00:07:51.364 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:51.364 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:51.364 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63515' 00:07:51.364 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63515 00:07:51.364 [2024-12-14 12:33:50.875688] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.364 12:33:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63515 00:07:51.364 [2024-12-14 12:33:50.893342] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:52.301 12:33:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:52.301 00:07:52.301 real 0m4.961s 00:07:52.301 user 0m7.165s 00:07:52.301 sys 0m0.764s 00:07:52.301 ************************************ 00:07:52.301 END TEST raid_state_function_test 00:07:52.301 ************************************ 00:07:52.301 12:33:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.301 12:33:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.560 12:33:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:52.561 12:33:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:52.561 12:33:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.561 12:33:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:52.561 ************************************ 00:07:52.561 START TEST raid_state_function_test_sb 00:07:52.561 ************************************ 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:52.561 Process raid pid: 63768 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63768 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63768' 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63768 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 63768 ']' 00:07:52.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.561 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:52.561 [2024-12-14 12:33:52.159783] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:52.561 [2024-12-14 12:33:52.159897] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.820 [2024-12-14 12:33:52.332781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.820 [2024-12-14 12:33:52.447772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.079 [2024-12-14 12:33:52.649165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.079 [2024-12-14 12:33:52.649258] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.339 [2024-12-14 12:33:52.993753] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.339 [2024-12-14 12:33:52.993873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.339 [2024-12-14 12:33:52.993888] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.339 [2024-12-14 12:33:52.993899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.339 12:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.339 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.339 
12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.339 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.339 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.339 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.339 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.339 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.339 "name": "Existed_Raid", 00:07:53.339 "uuid": "2030c28c-a7ac-4a5d-af0e-b116ffaa858b", 00:07:53.339 "strip_size_kb": 64, 00:07:53.339 "state": "configuring", 00:07:53.339 "raid_level": "concat", 00:07:53.339 "superblock": true, 00:07:53.339 "num_base_bdevs": 2, 00:07:53.339 "num_base_bdevs_discovered": 0, 00:07:53.339 "num_base_bdevs_operational": 2, 00:07:53.339 "base_bdevs_list": [ 00:07:53.339 { 00:07:53.339 "name": "BaseBdev1", 00:07:53.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.339 "is_configured": false, 00:07:53.339 "data_offset": 0, 00:07:53.339 "data_size": 0 00:07:53.339 }, 00:07:53.339 { 00:07:53.339 "name": "BaseBdev2", 00:07:53.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.339 "is_configured": false, 00:07:53.339 "data_offset": 0, 00:07:53.339 "data_size": 0 00:07:53.339 } 00:07:53.339 ] 00:07:53.339 }' 00:07:53.339 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.339 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.908 [2024-12-14 12:33:53.412966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:53.908 [2024-12-14 12:33:53.413069] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.908 [2024-12-14 12:33:53.424939] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:53.908 [2024-12-14 12:33:53.425017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:53.908 [2024-12-14 12:33:53.425056] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:53.908 [2024-12-14 12:33:53.425082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.908 [2024-12-14 12:33:53.471115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:07:53.908 BaseBdev1 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.908 [ 00:07:53.908 { 00:07:53.908 "name": "BaseBdev1", 00:07:53.908 "aliases": [ 00:07:53.908 "73d719f8-e90b-4aa4-b58a-bb712a0c2480" 00:07:53.908 ], 00:07:53.908 "product_name": "Malloc disk", 00:07:53.908 "block_size": 512, 00:07:53.908 "num_blocks": 65536, 00:07:53.908 "uuid": "73d719f8-e90b-4aa4-b58a-bb712a0c2480", 00:07:53.908 
"assigned_rate_limits": { 00:07:53.908 "rw_ios_per_sec": 0, 00:07:53.908 "rw_mbytes_per_sec": 0, 00:07:53.908 "r_mbytes_per_sec": 0, 00:07:53.908 "w_mbytes_per_sec": 0 00:07:53.908 }, 00:07:53.908 "claimed": true, 00:07:53.908 "claim_type": "exclusive_write", 00:07:53.908 "zoned": false, 00:07:53.908 "supported_io_types": { 00:07:53.908 "read": true, 00:07:53.908 "write": true, 00:07:53.908 "unmap": true, 00:07:53.908 "flush": true, 00:07:53.908 "reset": true, 00:07:53.908 "nvme_admin": false, 00:07:53.908 "nvme_io": false, 00:07:53.908 "nvme_io_md": false, 00:07:53.908 "write_zeroes": true, 00:07:53.908 "zcopy": true, 00:07:53.908 "get_zone_info": false, 00:07:53.908 "zone_management": false, 00:07:53.908 "zone_append": false, 00:07:53.908 "compare": false, 00:07:53.908 "compare_and_write": false, 00:07:53.908 "abort": true, 00:07:53.908 "seek_hole": false, 00:07:53.908 "seek_data": false, 00:07:53.908 "copy": true, 00:07:53.908 "nvme_iov_md": false 00:07:53.908 }, 00:07:53.908 "memory_domains": [ 00:07:53.908 { 00:07:53.908 "dma_device_id": "system", 00:07:53.908 "dma_device_type": 1 00:07:53.908 }, 00:07:53.908 { 00:07:53.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.908 "dma_device_type": 2 00:07:53.908 } 00:07:53.908 ], 00:07:53.908 "driver_specific": {} 00:07:53.908 } 00:07:53.908 ] 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.908 "name": "Existed_Raid", 00:07:53.908 "uuid": "351f4e93-8e5d-446b-b4a3-80a0fad5b1b1", 00:07:53.908 "strip_size_kb": 64, 00:07:53.908 "state": "configuring", 00:07:53.908 "raid_level": "concat", 00:07:53.908 "superblock": true, 00:07:53.908 "num_base_bdevs": 2, 00:07:53.908 "num_base_bdevs_discovered": 1, 00:07:53.908 "num_base_bdevs_operational": 2, 00:07:53.908 "base_bdevs_list": [ 00:07:53.908 { 00:07:53.908 "name": "BaseBdev1", 00:07:53.908 "uuid": "73d719f8-e90b-4aa4-b58a-bb712a0c2480", 00:07:53.908 "is_configured": true, 00:07:53.908 "data_offset": 
2048, 00:07:53.908 "data_size": 63488 00:07:53.908 }, 00:07:53.908 { 00:07:53.908 "name": "BaseBdev2", 00:07:53.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.908 "is_configured": false, 00:07:53.908 "data_offset": 0, 00:07:53.908 "data_size": 0 00:07:53.908 } 00:07:53.908 ] 00:07:53.908 }' 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.908 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.477 [2024-12-14 12:33:53.926391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.477 [2024-12-14 12:33:53.926450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.477 [2024-12-14 12:33:53.938397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.477 [2024-12-14 12:33:53.940167] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.477 [2024-12-14 12:33:53.940209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.477 "name": "Existed_Raid", 00:07:54.477 "uuid": "9aa1371a-3fc8-466c-8a5a-e86e509462e9", 00:07:54.477 "strip_size_kb": 64, 00:07:54.477 "state": "configuring", 00:07:54.477 "raid_level": "concat", 00:07:54.477 "superblock": true, 00:07:54.477 "num_base_bdevs": 2, 00:07:54.477 "num_base_bdevs_discovered": 1, 00:07:54.477 "num_base_bdevs_operational": 2, 00:07:54.477 "base_bdevs_list": [ 00:07:54.477 { 00:07:54.477 "name": "BaseBdev1", 00:07:54.477 "uuid": "73d719f8-e90b-4aa4-b58a-bb712a0c2480", 00:07:54.477 "is_configured": true, 00:07:54.477 "data_offset": 2048, 00:07:54.477 "data_size": 63488 00:07:54.477 }, 00:07:54.477 { 00:07:54.477 "name": "BaseBdev2", 00:07:54.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.477 "is_configured": false, 00:07:54.477 "data_offset": 0, 00:07:54.477 "data_size": 0 00:07:54.477 } 00:07:54.477 ] 00:07:54.477 }' 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.477 12:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.737 [2024-12-14 12:33:54.357643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:54.737 [2024-12-14 12:33:54.357983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:54.737 [2024-12-14 12:33:54.358032] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.737 [2024-12-14 12:33:54.358358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:54.737 [2024-12-14 12:33:54.358556] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:54.737 [2024-12-14 12:33:54.358607] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:54.737 BaseBdev2 00:07:54.737 [2024-12-14 12:33:54.358787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.737 [ 00:07:54.737 { 00:07:54.737 "name": "BaseBdev2", 00:07:54.737 "aliases": [ 00:07:54.737 "b0465436-ffe1-455c-b1f3-e89fdc1dab42" 00:07:54.737 ], 00:07:54.737 "product_name": "Malloc disk", 00:07:54.737 "block_size": 512, 00:07:54.737 "num_blocks": 65536, 00:07:54.737 "uuid": "b0465436-ffe1-455c-b1f3-e89fdc1dab42", 00:07:54.737 "assigned_rate_limits": { 00:07:54.737 "rw_ios_per_sec": 0, 00:07:54.737 "rw_mbytes_per_sec": 0, 00:07:54.737 "r_mbytes_per_sec": 0, 00:07:54.737 "w_mbytes_per_sec": 0 00:07:54.737 }, 00:07:54.737 "claimed": true, 00:07:54.737 "claim_type": "exclusive_write", 00:07:54.737 "zoned": false, 00:07:54.737 "supported_io_types": { 00:07:54.737 "read": true, 00:07:54.737 "write": true, 00:07:54.737 "unmap": true, 00:07:54.737 "flush": true, 00:07:54.737 "reset": true, 00:07:54.737 "nvme_admin": false, 00:07:54.737 "nvme_io": false, 00:07:54.737 "nvme_io_md": false, 00:07:54.737 "write_zeroes": true, 00:07:54.737 "zcopy": true, 00:07:54.737 "get_zone_info": false, 00:07:54.737 "zone_management": false, 00:07:54.737 "zone_append": false, 00:07:54.737 "compare": false, 00:07:54.737 "compare_and_write": false, 00:07:54.737 "abort": true, 00:07:54.737 "seek_hole": false, 00:07:54.737 "seek_data": false, 00:07:54.737 "copy": true, 00:07:54.737 "nvme_iov_md": false 00:07:54.737 }, 00:07:54.737 "memory_domains": [ 00:07:54.737 { 00:07:54.737 "dma_device_id": "system", 00:07:54.737 "dma_device_type": 1 00:07:54.737 }, 00:07:54.737 { 00:07:54.737 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:54.737 "dma_device_type": 2 00:07:54.737 } 00:07:54.737 ], 00:07:54.737 "driver_specific": {} 00:07:54.737 } 00:07:54.737 ] 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:54.737 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:54.737 "name": "Existed_Raid", 00:07:54.737 "uuid": "9aa1371a-3fc8-466c-8a5a-e86e509462e9", 00:07:54.737 "strip_size_kb": 64, 00:07:54.737 "state": "online", 00:07:54.737 "raid_level": "concat", 00:07:54.737 "superblock": true, 00:07:54.737 "num_base_bdevs": 2, 00:07:54.737 "num_base_bdevs_discovered": 2, 00:07:54.737 "num_base_bdevs_operational": 2, 00:07:54.737 "base_bdevs_list": [ 00:07:54.737 { 00:07:54.737 "name": "BaseBdev1", 00:07:54.737 "uuid": "73d719f8-e90b-4aa4-b58a-bb712a0c2480", 00:07:54.737 "is_configured": true, 00:07:54.737 "data_offset": 2048, 00:07:54.738 "data_size": 63488 00:07:54.738 }, 00:07:54.738 { 00:07:54.738 "name": "BaseBdev2", 00:07:54.738 "uuid": "b0465436-ffe1-455c-b1f3-e89fdc1dab42", 00:07:54.738 "is_configured": true, 00:07:54.738 "data_offset": 2048, 00:07:54.738 "data_size": 63488 00:07:54.738 } 00:07:54.738 ] 00:07:54.738 }' 00:07:54.738 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:54.738 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:55.306 [2024-12-14 12:33:54.789240] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:55.306 "name": "Existed_Raid", 00:07:55.306 "aliases": [ 00:07:55.306 "9aa1371a-3fc8-466c-8a5a-e86e509462e9" 00:07:55.306 ], 00:07:55.306 "product_name": "Raid Volume", 00:07:55.306 "block_size": 512, 00:07:55.306 "num_blocks": 126976, 00:07:55.306 "uuid": "9aa1371a-3fc8-466c-8a5a-e86e509462e9", 00:07:55.306 "assigned_rate_limits": { 00:07:55.306 "rw_ios_per_sec": 0, 00:07:55.306 "rw_mbytes_per_sec": 0, 00:07:55.306 "r_mbytes_per_sec": 0, 00:07:55.306 "w_mbytes_per_sec": 0 00:07:55.306 }, 00:07:55.306 "claimed": false, 00:07:55.306 "zoned": false, 00:07:55.306 "supported_io_types": { 00:07:55.306 "read": true, 00:07:55.306 "write": true, 00:07:55.306 "unmap": true, 00:07:55.306 "flush": true, 00:07:55.306 "reset": true, 00:07:55.306 "nvme_admin": false, 00:07:55.306 "nvme_io": false, 00:07:55.306 "nvme_io_md": false, 00:07:55.306 "write_zeroes": true, 00:07:55.306 "zcopy": false, 00:07:55.306 "get_zone_info": false, 00:07:55.306 "zone_management": false, 00:07:55.306 "zone_append": false, 00:07:55.306 "compare": false, 00:07:55.306 "compare_and_write": false, 00:07:55.306 "abort": false, 00:07:55.306 "seek_hole": false, 
00:07:55.306 "seek_data": false, 00:07:55.306 "copy": false, 00:07:55.306 "nvme_iov_md": false 00:07:55.306 }, 00:07:55.306 "memory_domains": [ 00:07:55.306 { 00:07:55.306 "dma_device_id": "system", 00:07:55.306 "dma_device_type": 1 00:07:55.306 }, 00:07:55.306 { 00:07:55.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.306 "dma_device_type": 2 00:07:55.306 }, 00:07:55.306 { 00:07:55.306 "dma_device_id": "system", 00:07:55.306 "dma_device_type": 1 00:07:55.306 }, 00:07:55.306 { 00:07:55.306 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.306 "dma_device_type": 2 00:07:55.306 } 00:07:55.306 ], 00:07:55.306 "driver_specific": { 00:07:55.306 "raid": { 00:07:55.306 "uuid": "9aa1371a-3fc8-466c-8a5a-e86e509462e9", 00:07:55.306 "strip_size_kb": 64, 00:07:55.306 "state": "online", 00:07:55.306 "raid_level": "concat", 00:07:55.306 "superblock": true, 00:07:55.306 "num_base_bdevs": 2, 00:07:55.306 "num_base_bdevs_discovered": 2, 00:07:55.306 "num_base_bdevs_operational": 2, 00:07:55.306 "base_bdevs_list": [ 00:07:55.306 { 00:07:55.306 "name": "BaseBdev1", 00:07:55.306 "uuid": "73d719f8-e90b-4aa4-b58a-bb712a0c2480", 00:07:55.306 "is_configured": true, 00:07:55.306 "data_offset": 2048, 00:07:55.306 "data_size": 63488 00:07:55.306 }, 00:07:55.306 { 00:07:55.306 "name": "BaseBdev2", 00:07:55.306 "uuid": "b0465436-ffe1-455c-b1f3-e89fdc1dab42", 00:07:55.306 "is_configured": true, 00:07:55.306 "data_offset": 2048, 00:07:55.306 "data_size": 63488 00:07:55.306 } 00:07:55.306 ] 00:07:55.306 } 00:07:55.306 } 00:07:55.306 }' 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.306 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:55.306 BaseBdev2' 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.307 12:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.307 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:55.307 12:33:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:55.307 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:55.307 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.307 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.307 [2024-12-14 12:33:55.016593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:55.307 [2024-12-14 12:33:55.016628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:55.307 [2024-12-14 12:33:55.016680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:55.566 12:33:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.566 "name": "Existed_Raid", 00:07:55.566 "uuid": "9aa1371a-3fc8-466c-8a5a-e86e509462e9", 00:07:55.566 "strip_size_kb": 64, 00:07:55.566 "state": "offline", 00:07:55.566 "raid_level": "concat", 00:07:55.566 "superblock": true, 00:07:55.566 "num_base_bdevs": 2, 00:07:55.566 "num_base_bdevs_discovered": 1, 00:07:55.566 "num_base_bdevs_operational": 1, 00:07:55.566 "base_bdevs_list": [ 00:07:55.566 { 00:07:55.566 "name": null, 00:07:55.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.566 "is_configured": false, 00:07:55.566 "data_offset": 0, 00:07:55.566 "data_size": 63488 00:07:55.566 }, 00:07:55.566 { 00:07:55.566 "name": 
"BaseBdev2", 00:07:55.566 "uuid": "b0465436-ffe1-455c-b1f3-e89fdc1dab42", 00:07:55.566 "is_configured": true, 00:07:55.566 "data_offset": 2048, 00:07:55.566 "data_size": 63488 00:07:55.566 } 00:07:55.566 ] 00:07:55.566 }' 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.566 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.139 [2024-12-14 12:33:55.624567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:56.139 [2024-12-14 12:33:55.624625] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:56.139 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63768 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63768 ']' 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63768 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63768 00:07:56.140 killing process with 
pid 63768 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63768' 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63768 00:07:56.140 [2024-12-14 12:33:55.799926] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.140 12:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63768 00:07:56.140 [2024-12-14 12:33:55.817174] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:57.552 12:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:57.552 00:07:57.552 real 0m4.827s 00:07:57.552 user 0m6.925s 00:07:57.552 sys 0m0.768s 00:07:57.552 12:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.552 12:33:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.552 ************************************ 00:07:57.552 END TEST raid_state_function_test_sb 00:07:57.552 ************************************ 00:07:57.552 12:33:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:57.552 12:33:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:57.552 12:33:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.552 12:33:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:57.552 ************************************ 00:07:57.552 START TEST raid_superblock_test 00:07:57.552 ************************************ 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # 
raid_superblock_test concat 2 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64019 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64019 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@835 -- # '[' -z 64019 ']' 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:57.552 12:33:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.552 [2024-12-14 12:33:57.040101] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:07:57.552 [2024-12-14 12:33:57.040302] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64019 ] 00:07:57.552 [2024-12-14 12:33:57.210973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.811 [2024-12-14 12:33:57.321203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.811 [2024-12-14 12:33:57.513569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:57.811 [2024-12-14 12:33:57.513598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:58.380 
12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.380 malloc1 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.380 [2024-12-14 12:33:57.920005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.380 [2024-12-14 12:33:57.920134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.380 [2024-12-14 12:33:57.920207] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:58.380 [2024-12-14 12:33:57.920246] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.380 [2024-12-14 12:33:57.922418] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.380 [2024-12-14 12:33:57.922504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.380 pt1 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.380 malloc2 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.380 [2024-12-14 12:33:57.976550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.380 [2024-12-14 12:33:57.976678] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.380 [2024-12-14 12:33:57.976733] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:58.380 [2024-12-14 12:33:57.976772] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.380 [2024-12-14 12:33:57.979306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.380 [2024-12-14 12:33:57.979408] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.380 
pt2 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.380 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.380 [2024-12-14 12:33:57.988593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.380 [2024-12-14 12:33:57.990518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.381 [2024-12-14 12:33:57.990753] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:58.381 [2024-12-14 12:33:57.990806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.381 [2024-12-14 12:33:57.991125] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:58.381 [2024-12-14 12:33:57.991330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:58.381 [2024-12-14 12:33:57.991378] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:58.381 [2024-12-14 12:33:57.991614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.381 12:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.381 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.381 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.381 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.381 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.381 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.381 "name": "raid_bdev1", 00:07:58.381 "uuid": "22f45857-5073-49f5-a4a7-dad1ee4b2216", 00:07:58.381 "strip_size_kb": 64, 00:07:58.381 "state": "online", 00:07:58.381 "raid_level": "concat", 00:07:58.381 "superblock": true, 00:07:58.381 "num_base_bdevs": 2, 00:07:58.381 "num_base_bdevs_discovered": 2, 00:07:58.381 "num_base_bdevs_operational": 2, 00:07:58.381 "base_bdevs_list": [ 00:07:58.381 { 00:07:58.381 "name": "pt1", 
00:07:58.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.381 "is_configured": true, 00:07:58.381 "data_offset": 2048, 00:07:58.381 "data_size": 63488 00:07:58.381 }, 00:07:58.381 { 00:07:58.381 "name": "pt2", 00:07:58.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.381 "is_configured": true, 00:07:58.381 "data_offset": 2048, 00:07:58.381 "data_size": 63488 00:07:58.381 } 00:07:58.381 ] 00:07:58.381 }' 00:07:58.381 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.381 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.949 [2024-12-14 12:33:58.452033] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:58.949 "name": "raid_bdev1", 00:07:58.949 "aliases": [ 00:07:58.949 "22f45857-5073-49f5-a4a7-dad1ee4b2216" 00:07:58.949 ], 00:07:58.949 "product_name": "Raid Volume", 00:07:58.949 "block_size": 512, 00:07:58.949 "num_blocks": 126976, 00:07:58.949 "uuid": "22f45857-5073-49f5-a4a7-dad1ee4b2216", 00:07:58.949 "assigned_rate_limits": { 00:07:58.949 "rw_ios_per_sec": 0, 00:07:58.949 "rw_mbytes_per_sec": 0, 00:07:58.949 "r_mbytes_per_sec": 0, 00:07:58.949 "w_mbytes_per_sec": 0 00:07:58.949 }, 00:07:58.949 "claimed": false, 00:07:58.949 "zoned": false, 00:07:58.949 "supported_io_types": { 00:07:58.949 "read": true, 00:07:58.949 "write": true, 00:07:58.949 "unmap": true, 00:07:58.949 "flush": true, 00:07:58.949 "reset": true, 00:07:58.949 "nvme_admin": false, 00:07:58.949 "nvme_io": false, 00:07:58.949 "nvme_io_md": false, 00:07:58.949 "write_zeroes": true, 00:07:58.949 "zcopy": false, 00:07:58.949 "get_zone_info": false, 00:07:58.949 "zone_management": false, 00:07:58.949 "zone_append": false, 00:07:58.949 "compare": false, 00:07:58.949 "compare_and_write": false, 00:07:58.949 "abort": false, 00:07:58.949 "seek_hole": false, 00:07:58.949 "seek_data": false, 00:07:58.949 "copy": false, 00:07:58.949 "nvme_iov_md": false 00:07:58.949 }, 00:07:58.949 "memory_domains": [ 00:07:58.949 { 00:07:58.949 "dma_device_id": "system", 00:07:58.949 "dma_device_type": 1 00:07:58.949 }, 00:07:58.949 { 00:07:58.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.949 "dma_device_type": 2 00:07:58.949 }, 00:07:58.949 { 00:07:58.949 "dma_device_id": "system", 00:07:58.949 "dma_device_type": 1 00:07:58.949 }, 00:07:58.949 { 00:07:58.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:58.949 "dma_device_type": 2 00:07:58.949 } 00:07:58.949 ], 00:07:58.949 "driver_specific": { 00:07:58.949 "raid": { 00:07:58.949 "uuid": "22f45857-5073-49f5-a4a7-dad1ee4b2216", 00:07:58.949 "strip_size_kb": 64, 00:07:58.949 "state": "online", 00:07:58.949 
"raid_level": "concat", 00:07:58.949 "superblock": true, 00:07:58.949 "num_base_bdevs": 2, 00:07:58.949 "num_base_bdevs_discovered": 2, 00:07:58.949 "num_base_bdevs_operational": 2, 00:07:58.949 "base_bdevs_list": [ 00:07:58.949 { 00:07:58.949 "name": "pt1", 00:07:58.949 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.949 "is_configured": true, 00:07:58.949 "data_offset": 2048, 00:07:58.949 "data_size": 63488 00:07:58.949 }, 00:07:58.949 { 00:07:58.949 "name": "pt2", 00:07:58.949 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.949 "is_configured": true, 00:07:58.949 "data_offset": 2048, 00:07:58.949 "data_size": 63488 00:07:58.949 } 00:07:58.949 ] 00:07:58.949 } 00:07:58.949 } 00:07:58.949 }' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:58.949 pt2' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.949 12:33:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.949 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 [2024-12-14 12:33:58.691599] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=22f45857-5073-49f5-a4a7-dad1ee4b2216 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
22f45857-5073-49f5-a4a7-dad1ee4b2216 ']' 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 [2024-12-14 12:33:58.739222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.210 [2024-12-14 12:33:58.739282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:59.210 [2024-12-14 12:33:58.739378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:59.210 [2024-12-14 12:33:58.739442] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:59.210 [2024-12-14 12:33:58.739478] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.210 12:33:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 [2024-12-14 12:33:58.879025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:59.210 [2024-12-14 12:33:58.880777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:59.210 [2024-12-14 12:33:58.880838] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:59.210 [2024-12-14 12:33:58.880889] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:59.210 [2024-12-14 12:33:58.880902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:59.210 [2024-12-14 12:33:58.880912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:59.210 request: 00:07:59.210 { 00:07:59.210 "name": "raid_bdev1", 00:07:59.210 "raid_level": "concat", 00:07:59.210 "base_bdevs": [ 00:07:59.210 "malloc1", 00:07:59.210 "malloc2" 00:07:59.210 ], 00:07:59.210 "strip_size_kb": 64, 
00:07:59.210 "superblock": false, 00:07:59.210 "method": "bdev_raid_create", 00:07:59.210 "req_id": 1 00:07:59.210 } 00:07:59.210 Got JSON-RPC error response 00:07:59.210 response: 00:07:59.210 { 00:07:59.210 "code": -17, 00:07:59.210 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:59.210 } 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.210 [2024-12-14 12:33:58.930916] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:07:59.210 [2024-12-14 12:33:58.931004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.210 [2024-12-14 12:33:58.931036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:59.210 [2024-12-14 12:33:58.931071] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.210 [2024-12-14 12:33:58.933191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.210 [2024-12-14 12:33:58.933260] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:59.210 [2024-12-14 12:33:58.933351] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:59.210 [2024-12-14 12:33:58.933416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:59.210 pt1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.210 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.211 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.470 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.470 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.470 "name": "raid_bdev1", 00:07:59.470 "uuid": "22f45857-5073-49f5-a4a7-dad1ee4b2216", 00:07:59.470 "strip_size_kb": 64, 00:07:59.470 "state": "configuring", 00:07:59.470 "raid_level": "concat", 00:07:59.470 "superblock": true, 00:07:59.470 "num_base_bdevs": 2, 00:07:59.470 "num_base_bdevs_discovered": 1, 00:07:59.470 "num_base_bdevs_operational": 2, 00:07:59.470 "base_bdevs_list": [ 00:07:59.470 { 00:07:59.470 "name": "pt1", 00:07:59.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.470 "is_configured": true, 00:07:59.470 "data_offset": 2048, 00:07:59.470 "data_size": 63488 00:07:59.470 }, 00:07:59.470 { 00:07:59.470 "name": null, 00:07:59.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.470 "is_configured": false, 00:07:59.470 "data_offset": 2048, 00:07:59.470 "data_size": 63488 00:07:59.470 } 00:07:59.470 ] 00:07:59.470 }' 00:07:59.470 12:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.470 12:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.730 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:59.730 12:33:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:59.730 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:59.730 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:59.730 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.730 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.730 [2024-12-14 12:33:59.342245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:59.730 [2024-12-14 12:33:59.342315] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:59.730 [2024-12-14 12:33:59.342337] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:59.730 [2024-12-14 12:33:59.342347] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:59.730 [2024-12-14 12:33:59.342784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:59.730 [2024-12-14 12:33:59.342804] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:59.731 [2024-12-14 12:33:59.342885] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:59.731 [2024-12-14 12:33:59.342910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:59.731 [2024-12-14 12:33:59.343016] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:59.731 [2024-12-14 12:33:59.343027] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:59.731 [2024-12-14 12:33:59.343306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:59.731 [2024-12-14 12:33:59.343449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:07:59.731 [2024-12-14 12:33:59.343462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:59.731 [2024-12-14 12:33:59.343597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:59.731 pt2 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.731 "name": "raid_bdev1", 00:07:59.731 "uuid": "22f45857-5073-49f5-a4a7-dad1ee4b2216", 00:07:59.731 "strip_size_kb": 64, 00:07:59.731 "state": "online", 00:07:59.731 "raid_level": "concat", 00:07:59.731 "superblock": true, 00:07:59.731 "num_base_bdevs": 2, 00:07:59.731 "num_base_bdevs_discovered": 2, 00:07:59.731 "num_base_bdevs_operational": 2, 00:07:59.731 "base_bdevs_list": [ 00:07:59.731 { 00:07:59.731 "name": "pt1", 00:07:59.731 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.731 "is_configured": true, 00:07:59.731 "data_offset": 2048, 00:07:59.731 "data_size": 63488 00:07:59.731 }, 00:07:59.731 { 00:07:59.731 "name": "pt2", 00:07:59.731 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.731 "is_configured": true, 00:07:59.731 "data_offset": 2048, 00:07:59.731 "data_size": 63488 00:07:59.731 } 00:07:59.731 ] 00:07:59.731 }' 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.731 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:00.301 12:33:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:00.301 [2024-12-14 12:33:59.801681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.301 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:00.301 "name": "raid_bdev1", 00:08:00.301 "aliases": [ 00:08:00.301 "22f45857-5073-49f5-a4a7-dad1ee4b2216" 00:08:00.301 ], 00:08:00.301 "product_name": "Raid Volume", 00:08:00.301 "block_size": 512, 00:08:00.301 "num_blocks": 126976, 00:08:00.301 "uuid": "22f45857-5073-49f5-a4a7-dad1ee4b2216", 00:08:00.301 "assigned_rate_limits": { 00:08:00.301 "rw_ios_per_sec": 0, 00:08:00.301 "rw_mbytes_per_sec": 0, 00:08:00.301 "r_mbytes_per_sec": 0, 00:08:00.301 "w_mbytes_per_sec": 0 00:08:00.301 }, 00:08:00.301 "claimed": false, 00:08:00.301 "zoned": false, 00:08:00.301 "supported_io_types": { 00:08:00.301 "read": true, 00:08:00.301 "write": true, 00:08:00.301 "unmap": true, 00:08:00.301 "flush": true, 00:08:00.301 "reset": true, 00:08:00.301 "nvme_admin": false, 00:08:00.301 "nvme_io": false, 00:08:00.301 "nvme_io_md": false, 00:08:00.301 "write_zeroes": true, 00:08:00.301 "zcopy": false, 00:08:00.301 "get_zone_info": false, 00:08:00.302 "zone_management": false, 00:08:00.302 "zone_append": false, 00:08:00.302 "compare": false, 00:08:00.302 "compare_and_write": false, 00:08:00.302 "abort": false, 00:08:00.302 "seek_hole": false, 00:08:00.302 
"seek_data": false, 00:08:00.302 "copy": false, 00:08:00.302 "nvme_iov_md": false 00:08:00.302 }, 00:08:00.302 "memory_domains": [ 00:08:00.302 { 00:08:00.302 "dma_device_id": "system", 00:08:00.302 "dma_device_type": 1 00:08:00.302 }, 00:08:00.302 { 00:08:00.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.302 "dma_device_type": 2 00:08:00.302 }, 00:08:00.302 { 00:08:00.302 "dma_device_id": "system", 00:08:00.302 "dma_device_type": 1 00:08:00.302 }, 00:08:00.302 { 00:08:00.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.302 "dma_device_type": 2 00:08:00.302 } 00:08:00.302 ], 00:08:00.302 "driver_specific": { 00:08:00.302 "raid": { 00:08:00.302 "uuid": "22f45857-5073-49f5-a4a7-dad1ee4b2216", 00:08:00.302 "strip_size_kb": 64, 00:08:00.302 "state": "online", 00:08:00.302 "raid_level": "concat", 00:08:00.302 "superblock": true, 00:08:00.302 "num_base_bdevs": 2, 00:08:00.302 "num_base_bdevs_discovered": 2, 00:08:00.302 "num_base_bdevs_operational": 2, 00:08:00.302 "base_bdevs_list": [ 00:08:00.302 { 00:08:00.302 "name": "pt1", 00:08:00.302 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:00.302 "is_configured": true, 00:08:00.302 "data_offset": 2048, 00:08:00.302 "data_size": 63488 00:08:00.302 }, 00:08:00.302 { 00:08:00.302 "name": "pt2", 00:08:00.302 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.302 "is_configured": true, 00:08:00.302 "data_offset": 2048, 00:08:00.302 "data_size": 63488 00:08:00.302 } 00:08:00.302 ] 00:08:00.302 } 00:08:00.302 } 00:08:00.302 }' 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:00.302 pt2' 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.302 12:33:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:00.302 12:33:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.302 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.302 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:00.302 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.562 [2024-12-14 12:34:00.061261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 22f45857-5073-49f5-a4a7-dad1ee4b2216 '!=' 22f45857-5073-49f5-a4a7-dad1ee4b2216 ']' 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64019 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64019 ']' 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64019 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64019 00:08:00.562 killing process with pid 64019 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 64019' 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64019 00:08:00.562 [2024-12-14 12:34:00.125411] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.562 [2024-12-14 12:34:00.125504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.562 [2024-12-14 12:34:00.125554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.562 [2024-12-14 12:34:00.125566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:00.562 12:34:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64019 00:08:00.821 [2024-12-14 12:34:00.333590] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:01.759 12:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:01.759 00:08:01.759 real 0m4.452s 00:08:01.759 user 0m6.265s 00:08:01.759 sys 0m0.731s 00:08:01.759 ************************************ 00:08:01.759 END TEST raid_superblock_test 00:08:01.759 ************************************ 00:08:01.759 12:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:01.759 12:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.759 12:34:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:01.759 12:34:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:01.759 12:34:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:01.759 12:34:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:01.759 ************************************ 00:08:01.759 START TEST raid_read_error_test 00:08:01.759 ************************************ 00:08:01.759 12:34:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:01.759 12:34:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:01.759 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.efmezpb4J6 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64226 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64226 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 64226 ']' 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.019 12:34:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.019 [2024-12-14 12:34:01.586220] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:02.019 [2024-12-14 12:34:01.586391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64226 ] 00:08:02.278 [2024-12-14 12:34:01.760584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.278 [2024-12-14 12:34:01.869101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.538 [2024-12-14 12:34:02.059554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.538 [2024-12-14 12:34:02.059707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.798 BaseBdev1_malloc 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.798 true 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.798 [2024-12-14 12:34:02.450469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.798 [2024-12-14 12:34:02.450529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.798 [2024-12-14 12:34:02.450549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:02.798 [2024-12-14 12:34:02.450559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.798 [2024-12-14 12:34:02.452613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.798 [2024-12-14 12:34:02.452766] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.798 BaseBdev1 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.798 BaseBdev2_malloc 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.798 true 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.798 [2024-12-14 12:34:02.512285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:02.798 [2024-12-14 12:34:02.512342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.798 [2024-12-14 12:34:02.512357] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:02.798 [2024-12-14 12:34:02.512366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.798 [2024-12-14 12:34:02.514380] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.798 [2024-12-14 12:34:02.514489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:02.798 BaseBdev2 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.798 [2024-12-14 12:34:02.524318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:02.798 [2024-12-14 12:34:02.526049] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:02.798 [2024-12-14 12:34:02.526253] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.798 [2024-12-14 12:34:02.526269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:02.798 [2024-12-14 12:34:02.526481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:02.798 [2024-12-14 12:34:02.526641] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.798 [2024-12-14 12:34:02.526652] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:02.798 [2024-12-14 12:34:02.526805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.798 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.799 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.799 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:02.799 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.059 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.059 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.059 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.059 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.059 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.059 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.059 "name": "raid_bdev1", 00:08:03.059 "uuid": "e06b2d31-44e4-4189-9418-aaa75c7d01d3", 00:08:03.059 "strip_size_kb": 64, 00:08:03.059 "state": "online", 00:08:03.059 "raid_level": "concat", 00:08:03.059 "superblock": true, 00:08:03.059 "num_base_bdevs": 2, 00:08:03.059 "num_base_bdevs_discovered": 2, 00:08:03.059 "num_base_bdevs_operational": 2, 00:08:03.059 "base_bdevs_list": [ 00:08:03.059 { 00:08:03.059 "name": "BaseBdev1", 00:08:03.059 "uuid": "6345f58f-60f9-520a-b5d9-60e992faa033", 00:08:03.059 "is_configured": true, 00:08:03.059 "data_offset": 2048, 00:08:03.059 "data_size": 63488 00:08:03.059 }, 00:08:03.059 { 00:08:03.059 "name": "BaseBdev2", 00:08:03.059 "uuid": "e16e87f3-ea22-5a3f-864e-bcf104c14332", 00:08:03.059 "is_configured": true, 00:08:03.059 "data_offset": 2048, 00:08:03.059 "data_size": 63488 00:08:03.059 } 00:08:03.059 ] 00:08:03.059 }' 00:08:03.059 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.059 12:34:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.320 12:34:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:03.320 12:34:02 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:03.626 [2024-12-14 12:34:03.084506] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:04.581 12:34:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:04.581 12:34:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.581 12:34:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.581 "name": "raid_bdev1", 00:08:04.581 "uuid": "e06b2d31-44e4-4189-9418-aaa75c7d01d3", 00:08:04.581 "strip_size_kb": 64, 00:08:04.581 "state": "online", 00:08:04.581 "raid_level": "concat", 00:08:04.581 "superblock": true, 00:08:04.581 "num_base_bdevs": 2, 00:08:04.581 "num_base_bdevs_discovered": 2, 00:08:04.581 "num_base_bdevs_operational": 2, 00:08:04.581 "base_bdevs_list": [ 00:08:04.581 { 00:08:04.581 "name": "BaseBdev1", 00:08:04.581 "uuid": "6345f58f-60f9-520a-b5d9-60e992faa033", 00:08:04.581 "is_configured": true, 00:08:04.581 "data_offset": 2048, 00:08:04.581 "data_size": 63488 00:08:04.581 }, 00:08:04.581 { 00:08:04.581 "name": "BaseBdev2", 00:08:04.581 "uuid": "e16e87f3-ea22-5a3f-864e-bcf104c14332", 00:08:04.581 "is_configured": true, 00:08:04.581 "data_offset": 2048, 00:08:04.581 "data_size": 63488 00:08:04.581 } 00:08:04.581 ] 00:08:04.581 }' 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.581 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.841 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:04.841 12:34:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.841 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.841 [2024-12-14 12:34:04.484354] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:04.841 [2024-12-14 12:34:04.484389] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:04.841 [2024-12-14 12:34:04.487218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:04.841 [2024-12-14 12:34:04.487310] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.841 [2024-12-14 12:34:04.487369] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:04.841 [2024-12-14 12:34:04.487385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:04.841 { 00:08:04.841 "results": [ 00:08:04.841 { 00:08:04.841 "job": "raid_bdev1", 00:08:04.841 "core_mask": "0x1", 00:08:04.841 "workload": "randrw", 00:08:04.841 "percentage": 50, 00:08:04.841 "status": "finished", 00:08:04.842 "queue_depth": 1, 00:08:04.842 "io_size": 131072, 00:08:04.842 "runtime": 1.400792, 00:08:04.842 "iops": 16356.461201948612, 00:08:04.842 "mibps": 2044.5576502435765, 00:08:04.842 "io_failed": 1, 00:08:04.842 "io_timeout": 0, 00:08:04.842 "avg_latency_us": 84.44378826535231, 00:08:04.842 "min_latency_us": 25.2646288209607, 00:08:04.842 "max_latency_us": 1316.4436681222708 00:08:04.842 } 00:08:04.842 ], 00:08:04.842 "core_count": 1 00:08:04.842 } 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64226 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 64226 ']' 00:08:04.842 12:34:04 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 64226 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64226 00:08:04.842 killing process with pid 64226 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64226' 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 64226 00:08:04.842 12:34:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 64226 00:08:04.842 [2024-12-14 12:34:04.527092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:05.101 [2024-12-14 12:34:04.658435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.efmezpb4J6 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:06.481 00:08:06.481 real 0m4.337s 00:08:06.481 user 0m5.234s 00:08:06.481 sys 0m0.533s 00:08:06.481 ************************************ 00:08:06.481 END TEST raid_read_error_test 00:08:06.481 ************************************ 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.481 12:34:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.481 12:34:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:06.481 12:34:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:06.481 12:34:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.481 12:34:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.481 ************************************ 00:08:06.481 START TEST raid_write_error_test 00:08:06.481 ************************************ 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.481 12:34:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aDJvutpQ33 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64372 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64372 00:08:06.481 12:34:05 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64372 ']' 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.481 12:34:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.481 [2024-12-14 12:34:05.985439] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:06.481 [2024-12-14 12:34:05.985997] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64372 ] 00:08:06.481 [2024-12-14 12:34:06.156294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.741 [2024-12-14 12:34:06.263697] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.741 [2024-12-14 12:34:06.457008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.741 [2024-12-14 12:34:06.457071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.310 BaseBdev1_malloc 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.310 true 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.310 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.310 [2024-12-14 12:34:06.862531] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:07.310 [2024-12-14 12:34:06.862585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.310 [2024-12-14 12:34:06.862604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:07.310 [2024-12-14 12:34:06.862614] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.310 [2024-12-14 12:34:06.864605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.311 [2024-12-14 12:34:06.864644] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:07.311 BaseBdev1 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.311 BaseBdev2_malloc 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.311 true 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.311 [2024-12-14 12:34:06.928098] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:07.311 [2024-12-14 12:34:06.928147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:07.311 [2024-12-14 12:34:06.928162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:07.311 
[2024-12-14 12:34:06.928172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:07.311 [2024-12-14 12:34:06.930141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:07.311 [2024-12-14 12:34:06.930238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:07.311 BaseBdev2 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.311 [2024-12-14 12:34:06.940139] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:07.311 [2024-12-14 12:34:06.941867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:07.311 [2024-12-14 12:34:06.942057] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:07.311 [2024-12-14 12:34:06.942073] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:07.311 [2024-12-14 12:34:06.942320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:07.311 [2024-12-14 12:34:06.942493] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:07.311 [2024-12-14 12:34:06.942511] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:07.311 [2024-12-14 12:34:06.942669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.311 
12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.311 "name": "raid_bdev1", 00:08:07.311 "uuid": "708fac1f-15a9-4f50-9361-3bd814d5f094", 00:08:07.311 "strip_size_kb": 64, 00:08:07.311 "state": "online", 00:08:07.311 "raid_level": "concat", 00:08:07.311 "superblock": true, 
00:08:07.311 "num_base_bdevs": 2, 00:08:07.311 "num_base_bdevs_discovered": 2, 00:08:07.311 "num_base_bdevs_operational": 2, 00:08:07.311 "base_bdevs_list": [ 00:08:07.311 { 00:08:07.311 "name": "BaseBdev1", 00:08:07.311 "uuid": "f434a75b-307a-5924-9a9a-0437b44904cb", 00:08:07.311 "is_configured": true, 00:08:07.311 "data_offset": 2048, 00:08:07.311 "data_size": 63488 00:08:07.311 }, 00:08:07.311 { 00:08:07.311 "name": "BaseBdev2", 00:08:07.311 "uuid": "c00ee1fb-4ab9-5e3f-8418-d9854d599e6f", 00:08:07.311 "is_configured": true, 00:08:07.311 "data_offset": 2048, 00:08:07.311 "data_size": 63488 00:08:07.311 } 00:08:07.311 ] 00:08:07.311 }' 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.311 12:34:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.881 12:34:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:07.881 12:34:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:07.881 [2024-12-14 12:34:07.416651] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:08.820 "name": "raid_bdev1", 00:08:08.820 "uuid": "708fac1f-15a9-4f50-9361-3bd814d5f094", 00:08:08.820 "strip_size_kb": 64, 00:08:08.820 "state": "online", 00:08:08.820 "raid_level": "concat", 
00:08:08.820 "superblock": true, 00:08:08.820 "num_base_bdevs": 2, 00:08:08.820 "num_base_bdevs_discovered": 2, 00:08:08.820 "num_base_bdevs_operational": 2, 00:08:08.820 "base_bdevs_list": [ 00:08:08.820 { 00:08:08.820 "name": "BaseBdev1", 00:08:08.820 "uuid": "f434a75b-307a-5924-9a9a-0437b44904cb", 00:08:08.820 "is_configured": true, 00:08:08.820 "data_offset": 2048, 00:08:08.820 "data_size": 63488 00:08:08.820 }, 00:08:08.820 { 00:08:08.820 "name": "BaseBdev2", 00:08:08.820 "uuid": "c00ee1fb-4ab9-5e3f-8418-d9854d599e6f", 00:08:08.820 "is_configured": true, 00:08:08.820 "data_offset": 2048, 00:08:08.820 "data_size": 63488 00:08:08.820 } 00:08:08.820 ] 00:08:08.820 }' 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:08.820 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.080 [2024-12-14 12:34:08.766617] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:09.080 [2024-12-14 12:34:08.766699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:09.080 [2024-12-14 12:34:08.769416] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.080 [2024-12-14 12:34:08.769462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.080 [2024-12-14 12:34:08.769493] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.080 [2024-12-14 12:34:08.769504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:09.080 { 
00:08:09.080 "results": [ 00:08:09.080 { 00:08:09.080 "job": "raid_bdev1", 00:08:09.080 "core_mask": "0x1", 00:08:09.080 "workload": "randrw", 00:08:09.080 "percentage": 50, 00:08:09.080 "status": "finished", 00:08:09.080 "queue_depth": 1, 00:08:09.080 "io_size": 131072, 00:08:09.080 "runtime": 1.350813, 00:08:09.080 "iops": 16259.097299182049, 00:08:09.080 "mibps": 2032.3871623977561, 00:08:09.080 "io_failed": 1, 00:08:09.080 "io_timeout": 0, 00:08:09.080 "avg_latency_us": 85.04246488298836, 00:08:09.080 "min_latency_us": 25.041048034934498, 00:08:09.080 "max_latency_us": 1387.989519650655 00:08:09.080 } 00:08:09.080 ], 00:08:09.080 "core_count": 1 00:08:09.080 } 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64372 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64372 ']' 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64372 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64372 00:08:09.080 killing process with pid 64372 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64372' 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64372 00:08:09.080 12:34:08 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@978 -- # wait 64372 00:08:09.080 [2024-12-14 12:34:08.802313] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.340 [2024-12-14 12:34:08.933801] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aDJvutpQ33 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:10.721 00:08:10.721 real 0m4.206s 00:08:10.721 user 0m4.972s 00:08:10.721 sys 0m0.517s 00:08:10.721 ************************************ 00:08:10.721 END TEST raid_write_error_test 00:08:10.721 ************************************ 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.721 12:34:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.721 12:34:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:10.721 12:34:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:10.721 12:34:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:10.721 12:34:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:10.721 12:34:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:08:10.721 ************************************ 00:08:10.721 START TEST raid_state_function_test 00:08:10.721 ************************************ 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local 
strip_size 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64510 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64510' 00:08:10.721 Process raid pid: 64510 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64510 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64510 ']' 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:10.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.721 12:34:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.721 [2024-12-14 12:34:10.253792] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:10.721 [2024-12-14 12:34:10.254003] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.721 [2024-12-14 12:34:10.427671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.981 [2024-12-14 12:34:10.539612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.240 [2024-12-14 12:34:10.733429] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.240 [2024-12-14 12:34:10.733543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.552 [2024-12-14 12:34:11.077702] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.552 [2024-12-14 12:34:11.077827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.552 [2024-12-14 12:34:11.077841] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:08:11.552 [2024-12-14 12:34:11.077851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.552 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.553 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:11.553 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.553 "name": "Existed_Raid", 00:08:11.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.553 "strip_size_kb": 0, 00:08:11.553 "state": "configuring", 00:08:11.553 "raid_level": "raid1", 00:08:11.553 "superblock": false, 00:08:11.553 "num_base_bdevs": 2, 00:08:11.553 "num_base_bdevs_discovered": 0, 00:08:11.553 "num_base_bdevs_operational": 2, 00:08:11.553 "base_bdevs_list": [ 00:08:11.553 { 00:08:11.553 "name": "BaseBdev1", 00:08:11.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.553 "is_configured": false, 00:08:11.553 "data_offset": 0, 00:08:11.553 "data_size": 0 00:08:11.553 }, 00:08:11.553 { 00:08:11.553 "name": "BaseBdev2", 00:08:11.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.553 "is_configured": false, 00:08:11.553 "data_offset": 0, 00:08:11.553 "data_size": 0 00:08:11.553 } 00:08:11.553 ] 00:08:11.553 }' 00:08:11.553 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.553 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.812 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.812 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.812 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.812 [2024-12-14 12:34:11.449071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.812 [2024-12-14 12:34:11.449168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.813 [2024-12-14 12:34:11.457016] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.813 [2024-12-14 12:34:11.457105] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.813 [2024-12-14 12:34:11.457138] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.813 [2024-12-14 12:34:11.457180] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.813 [2024-12-14 12:34:11.501456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.813 BaseBdev1 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:11.813 
12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.813 [ 00:08:11.813 { 00:08:11.813 "name": "BaseBdev1", 00:08:11.813 "aliases": [ 00:08:11.813 "b0a19f8f-e31e-4106-87a3-9841541a16cd" 00:08:11.813 ], 00:08:11.813 "product_name": "Malloc disk", 00:08:11.813 "block_size": 512, 00:08:11.813 "num_blocks": 65536, 00:08:11.813 "uuid": "b0a19f8f-e31e-4106-87a3-9841541a16cd", 00:08:11.813 "assigned_rate_limits": { 00:08:11.813 "rw_ios_per_sec": 0, 00:08:11.813 "rw_mbytes_per_sec": 0, 00:08:11.813 "r_mbytes_per_sec": 0, 00:08:11.813 "w_mbytes_per_sec": 0 00:08:11.813 }, 00:08:11.813 "claimed": true, 00:08:11.813 "claim_type": "exclusive_write", 00:08:11.813 "zoned": false, 00:08:11.813 "supported_io_types": { 00:08:11.813 "read": true, 00:08:11.813 "write": true, 00:08:11.813 "unmap": true, 00:08:11.813 "flush": true, 00:08:11.813 "reset": true, 00:08:11.813 "nvme_admin": false, 00:08:11.813 "nvme_io": false, 00:08:11.813 "nvme_io_md": false, 00:08:11.813 "write_zeroes": true, 00:08:11.813 "zcopy": true, 00:08:11.813 "get_zone_info": 
false, 00:08:11.813 "zone_management": false, 00:08:11.813 "zone_append": false, 00:08:11.813 "compare": false, 00:08:11.813 "compare_and_write": false, 00:08:11.813 "abort": true, 00:08:11.813 "seek_hole": false, 00:08:11.813 "seek_data": false, 00:08:11.813 "copy": true, 00:08:11.813 "nvme_iov_md": false 00:08:11.813 }, 00:08:11.813 "memory_domains": [ 00:08:11.813 { 00:08:11.813 "dma_device_id": "system", 00:08:11.813 "dma_device_type": 1 00:08:11.813 }, 00:08:11.813 { 00:08:11.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.813 "dma_device_type": 2 00:08:11.813 } 00:08:11.813 ], 00:08:11.813 "driver_specific": {} 00:08:11.813 } 00:08:11.813 ] 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.813 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.073 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.073 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.073 "name": "Existed_Raid", 00:08:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.073 "strip_size_kb": 0, 00:08:12.073 "state": "configuring", 00:08:12.073 "raid_level": "raid1", 00:08:12.073 "superblock": false, 00:08:12.073 "num_base_bdevs": 2, 00:08:12.073 "num_base_bdevs_discovered": 1, 00:08:12.073 "num_base_bdevs_operational": 2, 00:08:12.073 "base_bdevs_list": [ 00:08:12.073 { 00:08:12.073 "name": "BaseBdev1", 00:08:12.073 "uuid": "b0a19f8f-e31e-4106-87a3-9841541a16cd", 00:08:12.073 "is_configured": true, 00:08:12.073 "data_offset": 0, 00:08:12.073 "data_size": 65536 00:08:12.073 }, 00:08:12.073 { 00:08:12.073 "name": "BaseBdev2", 00:08:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.073 "is_configured": false, 00:08:12.073 "data_offset": 0, 00:08:12.073 "data_size": 0 00:08:12.073 } 00:08:12.073 ] 00:08:12.073 }' 00:08:12.073 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.073 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.333 [2024-12-14 12:34:11.960721] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:12.333 [2024-12-14 12:34:11.960819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.333 [2024-12-14 12:34:11.972744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:12.333 [2024-12-14 12:34:11.974571] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:12.333 [2024-12-14 12:34:11.974648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.333 12:34:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.333 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.333 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.333 "name": "Existed_Raid", 00:08:12.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.333 "strip_size_kb": 0, 00:08:12.333 "state": "configuring", 00:08:12.333 "raid_level": "raid1", 00:08:12.333 "superblock": false, 00:08:12.333 "num_base_bdevs": 2, 00:08:12.333 "num_base_bdevs_discovered": 1, 00:08:12.333 "num_base_bdevs_operational": 2, 00:08:12.333 "base_bdevs_list": [ 00:08:12.333 { 00:08:12.333 "name": "BaseBdev1", 00:08:12.333 "uuid": "b0a19f8f-e31e-4106-87a3-9841541a16cd", 00:08:12.333 
"is_configured": true, 00:08:12.333 "data_offset": 0, 00:08:12.333 "data_size": 65536 00:08:12.333 }, 00:08:12.333 { 00:08:12.333 "name": "BaseBdev2", 00:08:12.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.333 "is_configured": false, 00:08:12.333 "data_offset": 0, 00:08:12.333 "data_size": 0 00:08:12.333 } 00:08:12.333 ] 00:08:12.333 }' 00:08:12.333 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.333 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.903 [2024-12-14 12:34:12.395849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:12.903 [2024-12-14 12:34:12.395965] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.903 [2024-12-14 12:34:12.395991] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:12.903 [2024-12-14 12:34:12.396305] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:12.903 [2024-12-14 12:34:12.396521] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.903 [2024-12-14 12:34:12.396568] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:12.903 [2024-12-14 12:34:12.396868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.903 BaseBdev2 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.903 [ 00:08:12.903 { 00:08:12.903 "name": "BaseBdev2", 00:08:12.903 "aliases": [ 00:08:12.903 "ef7dce8f-49a7-4aac-ba78-815ebf5e2c8e" 00:08:12.903 ], 00:08:12.903 "product_name": "Malloc disk", 00:08:12.903 "block_size": 512, 00:08:12.903 "num_blocks": 65536, 00:08:12.903 "uuid": "ef7dce8f-49a7-4aac-ba78-815ebf5e2c8e", 00:08:12.903 "assigned_rate_limits": { 00:08:12.903 "rw_ios_per_sec": 0, 00:08:12.903 "rw_mbytes_per_sec": 0, 00:08:12.903 "r_mbytes_per_sec": 0, 00:08:12.903 "w_mbytes_per_sec": 0 00:08:12.903 }, 00:08:12.903 "claimed": true, 00:08:12.903 "claim_type": 
"exclusive_write", 00:08:12.903 "zoned": false, 00:08:12.903 "supported_io_types": { 00:08:12.903 "read": true, 00:08:12.903 "write": true, 00:08:12.903 "unmap": true, 00:08:12.903 "flush": true, 00:08:12.903 "reset": true, 00:08:12.903 "nvme_admin": false, 00:08:12.903 "nvme_io": false, 00:08:12.903 "nvme_io_md": false, 00:08:12.903 "write_zeroes": true, 00:08:12.903 "zcopy": true, 00:08:12.903 "get_zone_info": false, 00:08:12.903 "zone_management": false, 00:08:12.903 "zone_append": false, 00:08:12.903 "compare": false, 00:08:12.903 "compare_and_write": false, 00:08:12.903 "abort": true, 00:08:12.903 "seek_hole": false, 00:08:12.903 "seek_data": false, 00:08:12.903 "copy": true, 00:08:12.903 "nvme_iov_md": false 00:08:12.903 }, 00:08:12.903 "memory_domains": [ 00:08:12.903 { 00:08:12.903 "dma_device_id": "system", 00:08:12.903 "dma_device_type": 1 00:08:12.903 }, 00:08:12.903 { 00:08:12.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.903 "dma_device_type": 2 00:08:12.903 } 00:08:12.903 ], 00:08:12.903 "driver_specific": {} 00:08:12.903 } 00:08:12.903 ] 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:12.903 
12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.903 "name": "Existed_Raid", 00:08:12.903 "uuid": "ac15784f-e5bf-4f68-aa08-75c3c095687c", 00:08:12.903 "strip_size_kb": 0, 00:08:12.903 "state": "online", 00:08:12.903 "raid_level": "raid1", 00:08:12.903 "superblock": false, 00:08:12.903 "num_base_bdevs": 2, 00:08:12.903 "num_base_bdevs_discovered": 2, 00:08:12.903 "num_base_bdevs_operational": 2, 00:08:12.903 "base_bdevs_list": [ 00:08:12.903 { 00:08:12.903 "name": "BaseBdev1", 00:08:12.903 "uuid": "b0a19f8f-e31e-4106-87a3-9841541a16cd", 00:08:12.903 "is_configured": true, 00:08:12.903 "data_offset": 0, 00:08:12.903 "data_size": 65536 00:08:12.903 }, 00:08:12.903 { 00:08:12.903 "name": "BaseBdev2", 
00:08:12.903 "uuid": "ef7dce8f-49a7-4aac-ba78-815ebf5e2c8e", 00:08:12.903 "is_configured": true, 00:08:12.903 "data_offset": 0, 00:08:12.903 "data_size": 65536 00:08:12.903 } 00:08:12.903 ] 00:08:12.903 }' 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.903 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.163 [2024-12-14 12:34:12.835448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.163 "name": "Existed_Raid", 00:08:13.163 "aliases": [ 00:08:13.163 "ac15784f-e5bf-4f68-aa08-75c3c095687c" 00:08:13.163 ], 
00:08:13.163 "product_name": "Raid Volume", 00:08:13.163 "block_size": 512, 00:08:13.163 "num_blocks": 65536, 00:08:13.163 "uuid": "ac15784f-e5bf-4f68-aa08-75c3c095687c", 00:08:13.163 "assigned_rate_limits": { 00:08:13.163 "rw_ios_per_sec": 0, 00:08:13.163 "rw_mbytes_per_sec": 0, 00:08:13.163 "r_mbytes_per_sec": 0, 00:08:13.163 "w_mbytes_per_sec": 0 00:08:13.163 }, 00:08:13.163 "claimed": false, 00:08:13.163 "zoned": false, 00:08:13.163 "supported_io_types": { 00:08:13.163 "read": true, 00:08:13.163 "write": true, 00:08:13.163 "unmap": false, 00:08:13.163 "flush": false, 00:08:13.163 "reset": true, 00:08:13.163 "nvme_admin": false, 00:08:13.163 "nvme_io": false, 00:08:13.163 "nvme_io_md": false, 00:08:13.163 "write_zeroes": true, 00:08:13.163 "zcopy": false, 00:08:13.163 "get_zone_info": false, 00:08:13.163 "zone_management": false, 00:08:13.163 "zone_append": false, 00:08:13.163 "compare": false, 00:08:13.163 "compare_and_write": false, 00:08:13.163 "abort": false, 00:08:13.163 "seek_hole": false, 00:08:13.163 "seek_data": false, 00:08:13.163 "copy": false, 00:08:13.163 "nvme_iov_md": false 00:08:13.163 }, 00:08:13.163 "memory_domains": [ 00:08:13.163 { 00:08:13.163 "dma_device_id": "system", 00:08:13.163 "dma_device_type": 1 00:08:13.163 }, 00:08:13.163 { 00:08:13.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.163 "dma_device_type": 2 00:08:13.163 }, 00:08:13.163 { 00:08:13.163 "dma_device_id": "system", 00:08:13.163 "dma_device_type": 1 00:08:13.163 }, 00:08:13.163 { 00:08:13.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.163 "dma_device_type": 2 00:08:13.163 } 00:08:13.163 ], 00:08:13.163 "driver_specific": { 00:08:13.163 "raid": { 00:08:13.163 "uuid": "ac15784f-e5bf-4f68-aa08-75c3c095687c", 00:08:13.163 "strip_size_kb": 0, 00:08:13.163 "state": "online", 00:08:13.163 "raid_level": "raid1", 00:08:13.163 "superblock": false, 00:08:13.163 "num_base_bdevs": 2, 00:08:13.163 "num_base_bdevs_discovered": 2, 00:08:13.163 "num_base_bdevs_operational": 
2, 00:08:13.163 "base_bdevs_list": [ 00:08:13.163 { 00:08:13.163 "name": "BaseBdev1", 00:08:13.163 "uuid": "b0a19f8f-e31e-4106-87a3-9841541a16cd", 00:08:13.163 "is_configured": true, 00:08:13.163 "data_offset": 0, 00:08:13.163 "data_size": 65536 00:08:13.163 }, 00:08:13.163 { 00:08:13.163 "name": "BaseBdev2", 00:08:13.163 "uuid": "ef7dce8f-49a7-4aac-ba78-815ebf5e2c8e", 00:08:13.163 "is_configured": true, 00:08:13.163 "data_offset": 0, 00:08:13.163 "data_size": 65536 00:08:13.163 } 00:08:13.163 ] 00:08:13.163 } 00:08:13.163 } 00:08:13.163 }' 00:08:13.163 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:13.424 BaseBdev2' 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.424 12:34:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.424 12:34:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.424 [2024-12-14 12:34:13.058815] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.424 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.683 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.683 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.684 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.684 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.684 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.684 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.684 "name": "Existed_Raid", 00:08:13.684 "uuid": 
"ac15784f-e5bf-4f68-aa08-75c3c095687c", 00:08:13.684 "strip_size_kb": 0, 00:08:13.684 "state": "online", 00:08:13.684 "raid_level": "raid1", 00:08:13.684 "superblock": false, 00:08:13.684 "num_base_bdevs": 2, 00:08:13.684 "num_base_bdevs_discovered": 1, 00:08:13.684 "num_base_bdevs_operational": 1, 00:08:13.684 "base_bdevs_list": [ 00:08:13.684 { 00:08:13.684 "name": null, 00:08:13.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.684 "is_configured": false, 00:08:13.684 "data_offset": 0, 00:08:13.684 "data_size": 65536 00:08:13.684 }, 00:08:13.684 { 00:08:13.684 "name": "BaseBdev2", 00:08:13.684 "uuid": "ef7dce8f-49a7-4aac-ba78-815ebf5e2c8e", 00:08:13.684 "is_configured": true, 00:08:13.684 "data_offset": 0, 00:08:13.684 "data_size": 65536 00:08:13.684 } 00:08:13.684 ] 00:08:13.684 }' 00:08:13.684 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.684 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.944 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.944 [2024-12-14 12:34:13.659099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.944 [2024-12-14 12:34:13.659252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:14.203 [2024-12-14 12:34:13.751728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.203 [2024-12-14 12:34:13.751866] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.203 [2024-12-14 12:34:13.751907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:14.204 
12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64510 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64510 ']' 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64510 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64510 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64510' 00:08:14.204 killing process with pid 64510 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64510 00:08:14.204 [2024-12-14 12:34:13.841257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.204 12:34:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64510 00:08:14.204 [2024-12-14 12:34:13.858043] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.585 12:34:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:15.585 00:08:15.585 real 0m4.792s 00:08:15.585 user 0m6.853s 00:08:15.585 sys 0m0.779s 00:08:15.585 12:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:08:15.585 12:34:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.585 ************************************ 00:08:15.585 END TEST raid_state_function_test 00:08:15.585 ************************************ 00:08:15.585 12:34:15 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:15.585 12:34:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:15.585 12:34:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.585 12:34:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.585 ************************************ 00:08:15.585 START TEST raid_state_function_test_sb 00:08:15.585 ************************************ 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64758 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64758' 00:08:15.585 Process raid pid: 64758 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64758 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 64758 ']' 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.585 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.585 [2024-12-14 12:34:15.114398] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:15.585 [2024-12-14 12:34:15.114600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.585 [2024-12-14 12:34:15.288190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.844 [2024-12-14 12:34:15.396827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.104 [2024-12-14 12:34:15.598025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.104 [2024-12-14 12:34:15.598155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.364 [2024-12-14 12:34:15.940590] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.364 [2024-12-14 12:34:15.940641] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.364 [2024-12-14 12:34:15.940651] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.364 [2024-12-14 12:34:15.940660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.364 "name": "Existed_Raid", 00:08:16.364 "uuid": "5fb54416-b8e3-4481-a346-60320ec95531", 00:08:16.364 "strip_size_kb": 0, 00:08:16.364 "state": "configuring", 00:08:16.364 "raid_level": "raid1", 00:08:16.364 "superblock": true, 00:08:16.364 "num_base_bdevs": 2, 00:08:16.364 "num_base_bdevs_discovered": 0, 00:08:16.364 "num_base_bdevs_operational": 2, 00:08:16.364 "base_bdevs_list": [ 00:08:16.364 { 00:08:16.364 "name": "BaseBdev1", 00:08:16.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.364 "is_configured": false, 00:08:16.364 "data_offset": 0, 00:08:16.364 "data_size": 0 00:08:16.364 }, 00:08:16.364 { 00:08:16.364 "name": "BaseBdev2", 00:08:16.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.364 "is_configured": false, 00:08:16.364 "data_offset": 0, 00:08:16.364 "data_size": 0 00:08:16.364 } 00:08:16.364 ] 00:08:16.364 }' 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.364 12:34:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.938 [2024-12-14 12:34:16.391765] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.938 [2024-12-14 12:34:16.391847] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.938 [2024-12-14 12:34:16.403726] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.938 [2024-12-14 12:34:16.403800] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.938 [2024-12-14 12:34:16.403842] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.938 [2024-12-14 12:34:16.403866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:16.938 [2024-12-14 12:34:16.451084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.938 BaseBdev1 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.938 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.939 [ 00:08:16.939 { 00:08:16.939 "name": "BaseBdev1", 00:08:16.939 "aliases": [ 00:08:16.939 "c7ec6b2c-9220-4201-9820-7c1806a8ecc4" 00:08:16.939 ], 00:08:16.939 "product_name": "Malloc disk", 00:08:16.939 "block_size": 512, 
00:08:16.939 "num_blocks": 65536, 00:08:16.939 "uuid": "c7ec6b2c-9220-4201-9820-7c1806a8ecc4", 00:08:16.939 "assigned_rate_limits": { 00:08:16.939 "rw_ios_per_sec": 0, 00:08:16.939 "rw_mbytes_per_sec": 0, 00:08:16.939 "r_mbytes_per_sec": 0, 00:08:16.939 "w_mbytes_per_sec": 0 00:08:16.939 }, 00:08:16.939 "claimed": true, 00:08:16.939 "claim_type": "exclusive_write", 00:08:16.939 "zoned": false, 00:08:16.939 "supported_io_types": { 00:08:16.939 "read": true, 00:08:16.939 "write": true, 00:08:16.939 "unmap": true, 00:08:16.939 "flush": true, 00:08:16.939 "reset": true, 00:08:16.939 "nvme_admin": false, 00:08:16.939 "nvme_io": false, 00:08:16.939 "nvme_io_md": false, 00:08:16.939 "write_zeroes": true, 00:08:16.939 "zcopy": true, 00:08:16.939 "get_zone_info": false, 00:08:16.939 "zone_management": false, 00:08:16.939 "zone_append": false, 00:08:16.939 "compare": false, 00:08:16.939 "compare_and_write": false, 00:08:16.939 "abort": true, 00:08:16.939 "seek_hole": false, 00:08:16.939 "seek_data": false, 00:08:16.939 "copy": true, 00:08:16.939 "nvme_iov_md": false 00:08:16.939 }, 00:08:16.939 "memory_domains": [ 00:08:16.939 { 00:08:16.939 "dma_device_id": "system", 00:08:16.939 "dma_device_type": 1 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.939 "dma_device_type": 2 00:08:16.939 } 00:08:16.939 ], 00:08:16.939 "driver_specific": {} 00:08:16.939 } 00:08:16.939 ] 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.939 "name": "Existed_Raid", 00:08:16.939 "uuid": "7d5a547f-491e-4064-9b89-4b068ec51142", 00:08:16.939 "strip_size_kb": 0, 00:08:16.939 "state": "configuring", 00:08:16.939 "raid_level": "raid1", 00:08:16.939 "superblock": true, 00:08:16.939 "num_base_bdevs": 2, 00:08:16.939 "num_base_bdevs_discovered": 1, 00:08:16.939 "num_base_bdevs_operational": 2, 00:08:16.939 "base_bdevs_list": [ 00:08:16.939 { 00:08:16.939 "name": "BaseBdev1", 
00:08:16.939 "uuid": "c7ec6b2c-9220-4201-9820-7c1806a8ecc4", 00:08:16.939 "is_configured": true, 00:08:16.939 "data_offset": 2048, 00:08:16.939 "data_size": 63488 00:08:16.939 }, 00:08:16.939 { 00:08:16.939 "name": "BaseBdev2", 00:08:16.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.939 "is_configured": false, 00:08:16.939 "data_offset": 0, 00:08:16.939 "data_size": 0 00:08:16.939 } 00:08:16.939 ] 00:08:16.939 }' 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.939 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.198 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:17.198 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.198 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.457 [2024-12-14 12:34:16.938285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:17.457 [2024-12-14 12:34:16.938341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.457 [2024-12-14 12:34:16.946323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:17.457 [2024-12-14 12:34:16.948123] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:08:17.457 [2024-12-14 12:34:16.948164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.457 12:34:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.457 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.457 "name": "Existed_Raid", 00:08:17.457 "uuid": "88ca3006-144d-45fa-b8a1-fecf8d28204f", 00:08:17.457 "strip_size_kb": 0, 00:08:17.457 "state": "configuring", 00:08:17.457 "raid_level": "raid1", 00:08:17.457 "superblock": true, 00:08:17.457 "num_base_bdevs": 2, 00:08:17.457 "num_base_bdevs_discovered": 1, 00:08:17.457 "num_base_bdevs_operational": 2, 00:08:17.457 "base_bdevs_list": [ 00:08:17.457 { 00:08:17.457 "name": "BaseBdev1", 00:08:17.457 "uuid": "c7ec6b2c-9220-4201-9820-7c1806a8ecc4", 00:08:17.457 "is_configured": true, 00:08:17.457 "data_offset": 2048, 00:08:17.457 "data_size": 63488 00:08:17.457 }, 00:08:17.457 { 00:08:17.457 "name": "BaseBdev2", 00:08:17.457 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.457 "is_configured": false, 00:08:17.457 "data_offset": 0, 00:08:17.457 "data_size": 0 00:08:17.457 } 00:08:17.457 ] 00:08:17.457 }' 00:08:17.457 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.457 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.716 [2024-12-14 12:34:17.434453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.716 [2024-12-14 12:34:17.434769] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.716 [2024-12-14 12:34:17.434821] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:17.716 [2024-12-14 12:34:17.435121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.716 BaseBdev2 00:08:17.716 [2024-12-14 12:34:17.435347] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.716 [2024-12-14 12:34:17.435364] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.716 [2024-12-14 12:34:17.435500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.716 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.976 [ 00:08:17.976 { 00:08:17.976 "name": "BaseBdev2", 00:08:17.976 "aliases": [ 00:08:17.976 "01cee47f-1e1a-4298-bb7f-cb709791a67e" 00:08:17.976 ], 00:08:17.976 "product_name": "Malloc disk", 00:08:17.976 "block_size": 512, 00:08:17.976 "num_blocks": 65536, 00:08:17.976 "uuid": "01cee47f-1e1a-4298-bb7f-cb709791a67e", 00:08:17.976 "assigned_rate_limits": { 00:08:17.976 "rw_ios_per_sec": 0, 00:08:17.976 "rw_mbytes_per_sec": 0, 00:08:17.976 "r_mbytes_per_sec": 0, 00:08:17.976 "w_mbytes_per_sec": 0 00:08:17.976 }, 00:08:17.976 "claimed": true, 00:08:17.976 "claim_type": "exclusive_write", 00:08:17.976 "zoned": false, 00:08:17.976 "supported_io_types": { 00:08:17.976 "read": true, 00:08:17.976 "write": true, 00:08:17.976 "unmap": true, 00:08:17.976 "flush": true, 00:08:17.976 "reset": true, 00:08:17.976 "nvme_admin": false, 00:08:17.976 "nvme_io": false, 00:08:17.976 "nvme_io_md": false, 00:08:17.976 "write_zeroes": true, 00:08:17.976 "zcopy": true, 00:08:17.976 "get_zone_info": false, 00:08:17.976 "zone_management": false, 00:08:17.976 "zone_append": false, 00:08:17.976 "compare": false, 00:08:17.976 "compare_and_write": false, 00:08:17.976 "abort": true, 00:08:17.976 "seek_hole": false, 00:08:17.976 "seek_data": false, 00:08:17.976 "copy": true, 00:08:17.976 "nvme_iov_md": false 00:08:17.976 }, 00:08:17.976 "memory_domains": [ 00:08:17.976 { 00:08:17.976 "dma_device_id": "system", 00:08:17.976 "dma_device_type": 1 00:08:17.976 }, 00:08:17.976 { 00:08:17.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.976 "dma_device_type": 2 00:08:17.976 } 00:08:17.976 ], 00:08:17.976 "driver_specific": 
{} 00:08:17.976 } 00:08:17.976 ] 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.976 "name": "Existed_Raid", 00:08:17.976 "uuid": "88ca3006-144d-45fa-b8a1-fecf8d28204f", 00:08:17.976 "strip_size_kb": 0, 00:08:17.976 "state": "online", 00:08:17.976 "raid_level": "raid1", 00:08:17.976 "superblock": true, 00:08:17.976 "num_base_bdevs": 2, 00:08:17.976 "num_base_bdevs_discovered": 2, 00:08:17.976 "num_base_bdevs_operational": 2, 00:08:17.976 "base_bdevs_list": [ 00:08:17.976 { 00:08:17.976 "name": "BaseBdev1", 00:08:17.976 "uuid": "c7ec6b2c-9220-4201-9820-7c1806a8ecc4", 00:08:17.976 "is_configured": true, 00:08:17.976 "data_offset": 2048, 00:08:17.976 "data_size": 63488 00:08:17.976 }, 00:08:17.976 { 00:08:17.976 "name": "BaseBdev2", 00:08:17.976 "uuid": "01cee47f-1e1a-4298-bb7f-cb709791a67e", 00:08:17.976 "is_configured": true, 00:08:17.976 "data_offset": 2048, 00:08:17.976 "data_size": 63488 00:08:17.976 } 00:08:17.976 ] 00:08:17.976 }' 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.976 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.236 [2024-12-14 12:34:17.909897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.236 "name": "Existed_Raid", 00:08:18.236 "aliases": [ 00:08:18.236 "88ca3006-144d-45fa-b8a1-fecf8d28204f" 00:08:18.236 ], 00:08:18.236 "product_name": "Raid Volume", 00:08:18.236 "block_size": 512, 00:08:18.236 "num_blocks": 63488, 00:08:18.236 "uuid": "88ca3006-144d-45fa-b8a1-fecf8d28204f", 00:08:18.236 "assigned_rate_limits": { 00:08:18.236 "rw_ios_per_sec": 0, 00:08:18.236 "rw_mbytes_per_sec": 0, 00:08:18.236 "r_mbytes_per_sec": 0, 00:08:18.236 "w_mbytes_per_sec": 0 00:08:18.236 }, 00:08:18.236 "claimed": false, 00:08:18.236 "zoned": false, 00:08:18.236 "supported_io_types": { 00:08:18.236 "read": true, 00:08:18.236 "write": true, 00:08:18.236 "unmap": false, 00:08:18.236 "flush": false, 00:08:18.236 "reset": true, 00:08:18.236 "nvme_admin": false, 00:08:18.236 "nvme_io": false, 00:08:18.236 "nvme_io_md": false, 00:08:18.236 "write_zeroes": true, 00:08:18.236 "zcopy": false, 00:08:18.236 "get_zone_info": false, 00:08:18.236 "zone_management": false, 00:08:18.236 "zone_append": false, 00:08:18.236 "compare": false, 00:08:18.236 "compare_and_write": false, 
00:08:18.236 "abort": false, 00:08:18.236 "seek_hole": false, 00:08:18.236 "seek_data": false, 00:08:18.236 "copy": false, 00:08:18.236 "nvme_iov_md": false 00:08:18.236 }, 00:08:18.236 "memory_domains": [ 00:08:18.236 { 00:08:18.236 "dma_device_id": "system", 00:08:18.236 "dma_device_type": 1 00:08:18.236 }, 00:08:18.236 { 00:08:18.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.236 "dma_device_type": 2 00:08:18.236 }, 00:08:18.236 { 00:08:18.236 "dma_device_id": "system", 00:08:18.236 "dma_device_type": 1 00:08:18.236 }, 00:08:18.236 { 00:08:18.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.236 "dma_device_type": 2 00:08:18.236 } 00:08:18.236 ], 00:08:18.236 "driver_specific": { 00:08:18.236 "raid": { 00:08:18.236 "uuid": "88ca3006-144d-45fa-b8a1-fecf8d28204f", 00:08:18.236 "strip_size_kb": 0, 00:08:18.236 "state": "online", 00:08:18.236 "raid_level": "raid1", 00:08:18.236 "superblock": true, 00:08:18.236 "num_base_bdevs": 2, 00:08:18.236 "num_base_bdevs_discovered": 2, 00:08:18.236 "num_base_bdevs_operational": 2, 00:08:18.236 "base_bdevs_list": [ 00:08:18.236 { 00:08:18.236 "name": "BaseBdev1", 00:08:18.236 "uuid": "c7ec6b2c-9220-4201-9820-7c1806a8ecc4", 00:08:18.236 "is_configured": true, 00:08:18.236 "data_offset": 2048, 00:08:18.236 "data_size": 63488 00:08:18.236 }, 00:08:18.236 { 00:08:18.236 "name": "BaseBdev2", 00:08:18.236 "uuid": "01cee47f-1e1a-4298-bb7f-cb709791a67e", 00:08:18.236 "is_configured": true, 00:08:18.236 "data_offset": 2048, 00:08:18.236 "data_size": 63488 00:08:18.236 } 00:08:18.236 ] 00:08:18.236 } 00:08:18.236 } 00:08:18.236 }' 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:18.236 BaseBdev2' 00:08:18.236 12:34:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.496 [2024-12-14 12:34:18.121315] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:18.496 12:34:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.496 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.756 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.756 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.756 "name": "Existed_Raid", 00:08:18.756 "uuid": "88ca3006-144d-45fa-b8a1-fecf8d28204f", 00:08:18.756 "strip_size_kb": 0, 00:08:18.756 "state": "online", 00:08:18.756 "raid_level": "raid1", 00:08:18.756 "superblock": true, 00:08:18.756 "num_base_bdevs": 2, 00:08:18.756 "num_base_bdevs_discovered": 1, 00:08:18.756 "num_base_bdevs_operational": 1, 00:08:18.756 "base_bdevs_list": [ 00:08:18.756 { 00:08:18.756 "name": null, 00:08:18.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.756 "is_configured": false, 00:08:18.756 "data_offset": 0, 00:08:18.756 "data_size": 63488 00:08:18.756 }, 00:08:18.756 { 00:08:18.756 "name": "BaseBdev2", 00:08:18.756 "uuid": "01cee47f-1e1a-4298-bb7f-cb709791a67e", 00:08:18.756 "is_configured": true, 00:08:18.756 "data_offset": 2048, 00:08:18.756 "data_size": 63488 00:08:18.756 } 00:08:18.756 ] 00:08:18.756 }' 00:08:18.756 
12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.756 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.015 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.016 [2024-12-14 12:34:18.719962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:19.016 [2024-12-14 12:34:18.720084] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.275 [2024-12-14 12:34:18.813896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:19.275 [2024-12-14 12:34:18.813945] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:19.275 [2024-12-14 12:34:18.813957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64758 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64758 ']' 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64758 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:19.275 12:34:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:19.276 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64758 00:08:19.276 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:19.276 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:19.276 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64758' 00:08:19.276 killing process with pid 64758 00:08:19.276 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64758 00:08:19.276 [2024-12-14 12:34:18.909910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:19.276 12:34:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64758 00:08:19.276 [2024-12-14 12:34:18.926560] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.698 12:34:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.698 00:08:20.698 real 0m5.015s 00:08:20.698 user 0m7.268s 00:08:20.698 sys 0m0.764s 00:08:20.698 12:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:20.698 12:34:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.698 ************************************ 00:08:20.698 END TEST raid_state_function_test_sb 00:08:20.698 ************************************ 00:08:20.698 12:34:20 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:20.698 12:34:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:20.698 12:34:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:20.698 12:34:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.698 
************************************ 00:08:20.698 START TEST raid_superblock_test 00:08:20.698 ************************************ 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65010 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65010 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 65010 ']' 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:20.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:20.698 12:34:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.698 [2024-12-14 12:34:20.189833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:20.698 [2024-12-14 12:34:20.189963] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65010 ] 00:08:20.698 [2024-12-14 12:34:20.360609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.957 [2024-12-14 12:34:20.474905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.957 [2024-12-14 12:34:20.675770] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.957 [2024-12-14 12:34:20.675803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.525 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:21.526 
12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.526 malloc1 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.526 [2024-12-14 12:34:21.056961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:21.526 [2024-12-14 12:34:21.057020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.526 [2024-12-14 12:34:21.057076] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:21.526 [2024-12-14 12:34:21.057086] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.526 [2024-12-14 12:34:21.059071] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.526 [2024-12-14 12:34:21.059106] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:21.526 pt1 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.526 malloc2 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.526 [2024-12-14 12:34:21.112918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:21.526 [2024-12-14 12:34:21.112974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.526 [2024-12-14 12:34:21.112997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:21.526 [2024-12-14 12:34:21.113006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.526 [2024-12-14 12:34:21.115097] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.526 [2024-12-14 12:34:21.115132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:21.526 
pt2 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.526 [2024-12-14 12:34:21.124938] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:21.526 [2024-12-14 12:34:21.126726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:21.526 [2024-12-14 12:34:21.126899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:21.526 [2024-12-14 12:34:21.126924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:21.526 [2024-12-14 12:34:21.127184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:21.526 [2024-12-14 12:34:21.127352] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:21.526 [2024-12-14 12:34:21.127374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:21.526 [2024-12-14 12:34:21.127521] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.526 "name": "raid_bdev1", 00:08:21.526 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:21.526 "strip_size_kb": 0, 00:08:21.526 "state": "online", 00:08:21.526 "raid_level": "raid1", 00:08:21.526 "superblock": true, 00:08:21.526 "num_base_bdevs": 2, 00:08:21.526 "num_base_bdevs_discovered": 2, 00:08:21.526 "num_base_bdevs_operational": 2, 00:08:21.526 "base_bdevs_list": [ 00:08:21.526 { 00:08:21.526 "name": "pt1", 00:08:21.526 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:21.526 "is_configured": true, 00:08:21.526 "data_offset": 2048, 00:08:21.526 "data_size": 63488 00:08:21.526 }, 00:08:21.526 { 00:08:21.526 "name": "pt2", 00:08:21.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.526 "is_configured": true, 00:08:21.526 "data_offset": 2048, 00:08:21.526 "data_size": 63488 00:08:21.526 } 00:08:21.526 ] 00:08:21.526 }' 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.526 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.095 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.096 [2024-12-14 12:34:21.556471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:22.096 "name": "raid_bdev1", 00:08:22.096 "aliases": [ 00:08:22.096 "2765cb57-bbb3-4ce8-84d6-5cffa3824f74" 00:08:22.096 ], 00:08:22.096 "product_name": "Raid Volume", 00:08:22.096 "block_size": 512, 00:08:22.096 "num_blocks": 63488, 00:08:22.096 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:22.096 "assigned_rate_limits": { 00:08:22.096 "rw_ios_per_sec": 0, 00:08:22.096 "rw_mbytes_per_sec": 0, 00:08:22.096 "r_mbytes_per_sec": 0, 00:08:22.096 "w_mbytes_per_sec": 0 00:08:22.096 }, 00:08:22.096 "claimed": false, 00:08:22.096 "zoned": false, 00:08:22.096 "supported_io_types": { 00:08:22.096 "read": true, 00:08:22.096 "write": true, 00:08:22.096 "unmap": false, 00:08:22.096 "flush": false, 00:08:22.096 "reset": true, 00:08:22.096 "nvme_admin": false, 00:08:22.096 "nvme_io": false, 00:08:22.096 "nvme_io_md": false, 00:08:22.096 "write_zeroes": true, 00:08:22.096 "zcopy": false, 00:08:22.096 "get_zone_info": false, 00:08:22.096 "zone_management": false, 00:08:22.096 "zone_append": false, 00:08:22.096 "compare": false, 00:08:22.096 "compare_and_write": false, 00:08:22.096 "abort": false, 00:08:22.096 "seek_hole": false, 00:08:22.096 "seek_data": false, 00:08:22.096 "copy": false, 00:08:22.096 "nvme_iov_md": false 00:08:22.096 }, 00:08:22.096 "memory_domains": [ 00:08:22.096 { 00:08:22.096 "dma_device_id": "system", 00:08:22.096 "dma_device_type": 1 00:08:22.096 }, 00:08:22.096 { 00:08:22.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.096 "dma_device_type": 2 00:08:22.096 }, 00:08:22.096 { 00:08:22.096 "dma_device_id": "system", 00:08:22.096 "dma_device_type": 1 00:08:22.096 }, 00:08:22.096 { 00:08:22.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.096 "dma_device_type": 2 00:08:22.096 } 00:08:22.096 ], 00:08:22.096 "driver_specific": { 00:08:22.096 "raid": { 00:08:22.096 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:22.096 "strip_size_kb": 0, 00:08:22.096 "state": "online", 00:08:22.096 "raid_level": "raid1", 
00:08:22.096 "superblock": true, 00:08:22.096 "num_base_bdevs": 2, 00:08:22.096 "num_base_bdevs_discovered": 2, 00:08:22.096 "num_base_bdevs_operational": 2, 00:08:22.096 "base_bdevs_list": [ 00:08:22.096 { 00:08:22.096 "name": "pt1", 00:08:22.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.096 "is_configured": true, 00:08:22.096 "data_offset": 2048, 00:08:22.096 "data_size": 63488 00:08:22.096 }, 00:08:22.096 { 00:08:22.096 "name": "pt2", 00:08:22.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.096 "is_configured": true, 00:08:22.096 "data_offset": 2048, 00:08:22.096 "data_size": 63488 00:08:22.096 } 00:08:22.096 ] 00:08:22.096 } 00:08:22.096 } 00:08:22.096 }' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:22.096 pt2' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.096 [2024-12-14 12:34:21.807998] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2765cb57-bbb3-4ce8-84d6-5cffa3824f74 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2765cb57-bbb3-4ce8-84d6-5cffa3824f74 ']' 00:08:22.096 12:34:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.096 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.355 [2024-12-14 12:34:21.831698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.355 [2024-12-14 12:34:21.831728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.355 [2024-12-14 12:34:21.831822] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.355 [2024-12-14 12:34:21.831883] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.355 [2024-12-14 12:34:21.831898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:22.355 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.356 12:34:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.356 [2024-12-14 12:34:21.939584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:22.356 [2024-12-14 12:34:21.941467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:22.356 [2024-12-14 12:34:21.941544] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:22.356 [2024-12-14 12:34:21.941601] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:22.356 [2024-12-14 12:34:21.941616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.356 [2024-12-14 12:34:21.941627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:22.356 request: 00:08:22.356 { 00:08:22.356 "name": "raid_bdev1", 00:08:22.356 "raid_level": "raid1", 00:08:22.356 "base_bdevs": [ 00:08:22.356 "malloc1", 00:08:22.356 "malloc2" 00:08:22.356 ], 00:08:22.356 "superblock": false, 00:08:22.356 "method": "bdev_raid_create", 00:08:22.356 "req_id": 1 00:08:22.356 } 00:08:22.356 Got 
JSON-RPC error response 00:08:22.356 response: 00:08:22.356 { 00:08:22.356 "code": -17, 00:08:22.356 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:22.356 } 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.356 [2024-12-14 12:34:21.995456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:22.356 [2024-12-14 12:34:21.995530] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:22.356 [2024-12-14 12:34:21.995548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:22.356 [2024-12-14 12:34:21.995559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.356 [2024-12-14 12:34:21.997889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.356 [2024-12-14 12:34:21.997930] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:22.356 [2024-12-14 12:34:21.998023] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:22.356 [2024-12-14 12:34:21.998112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:22.356 pt1 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:22.356 12:34:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.356 
12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.356 "name": "raid_bdev1", 00:08:22.356 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:22.356 "strip_size_kb": 0, 00:08:22.356 "state": "configuring", 00:08:22.356 "raid_level": "raid1", 00:08:22.356 "superblock": true, 00:08:22.356 "num_base_bdevs": 2, 00:08:22.356 "num_base_bdevs_discovered": 1, 00:08:22.356 "num_base_bdevs_operational": 2, 00:08:22.356 "base_bdevs_list": [ 00:08:22.356 { 00:08:22.356 "name": "pt1", 00:08:22.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.356 "is_configured": true, 00:08:22.356 "data_offset": 2048, 00:08:22.356 "data_size": 63488 00:08:22.356 }, 00:08:22.356 { 00:08:22.356 "name": null, 00:08:22.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.356 "is_configured": false, 00:08:22.356 "data_offset": 2048, 00:08:22.356 "data_size": 63488 00:08:22.356 } 00:08:22.356 ] 00:08:22.356 }' 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.356 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.923 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:22.923 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:22.923 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:22.923 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:22.923 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.923 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.923 [2024-12-14 12:34:22.454667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:22.923 [2024-12-14 12:34:22.454741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.923 [2024-12-14 12:34:22.454763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:22.923 [2024-12-14 12:34:22.454773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.923 [2024-12-14 12:34:22.455229] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.923 [2024-12-14 12:34:22.455259] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:22.923 [2024-12-14 12:34:22.455343] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:22.923 [2024-12-14 12:34:22.455373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:22.923 [2024-12-14 12:34:22.455509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.923 [2024-12-14 12:34:22.455528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:22.923 [2024-12-14 12:34:22.455768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:22.923 [2024-12-14 12:34:22.455938] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.923 [2024-12-14 12:34:22.455957] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:22.923 [2024-12-14 12:34:22.456126] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.923 pt2 00:08:22.923 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.923 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.924 "name": "raid_bdev1", 00:08:22.924 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:22.924 "strip_size_kb": 0, 00:08:22.924 "state": "online", 00:08:22.924 "raid_level": "raid1", 00:08:22.924 "superblock": true, 00:08:22.924 "num_base_bdevs": 2, 00:08:22.924 "num_base_bdevs_discovered": 2, 00:08:22.924 "num_base_bdevs_operational": 2, 00:08:22.924 "base_bdevs_list": [ 00:08:22.924 { 00:08:22.924 "name": "pt1", 00:08:22.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.924 "is_configured": true, 00:08:22.924 "data_offset": 2048, 00:08:22.924 "data_size": 63488 00:08:22.924 }, 00:08:22.924 { 00:08:22.924 "name": "pt2", 00:08:22.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.924 "is_configured": true, 00:08:22.924 "data_offset": 2048, 00:08:22.924 "data_size": 63488 00:08:22.924 } 00:08:22.924 ] 00:08:22.924 }' 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.924 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.182 [2024-12-14 12:34:22.890223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.182 12:34:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.441 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.441 "name": "raid_bdev1", 00:08:23.441 "aliases": [ 00:08:23.441 "2765cb57-bbb3-4ce8-84d6-5cffa3824f74" 00:08:23.441 ], 00:08:23.441 "product_name": "Raid Volume", 00:08:23.441 "block_size": 512, 00:08:23.441 "num_blocks": 63488, 00:08:23.441 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:23.441 "assigned_rate_limits": { 00:08:23.441 "rw_ios_per_sec": 0, 00:08:23.441 "rw_mbytes_per_sec": 0, 00:08:23.441 "r_mbytes_per_sec": 0, 00:08:23.441 "w_mbytes_per_sec": 0 00:08:23.441 }, 00:08:23.441 "claimed": false, 00:08:23.441 "zoned": false, 00:08:23.441 "supported_io_types": { 00:08:23.441 "read": true, 00:08:23.441 "write": true, 00:08:23.441 "unmap": false, 00:08:23.441 "flush": false, 00:08:23.441 "reset": true, 00:08:23.441 "nvme_admin": false, 00:08:23.441 "nvme_io": false, 00:08:23.441 "nvme_io_md": false, 00:08:23.441 "write_zeroes": true, 00:08:23.441 "zcopy": false, 00:08:23.441 "get_zone_info": false, 00:08:23.441 "zone_management": false, 00:08:23.441 "zone_append": false, 00:08:23.441 "compare": false, 00:08:23.441 "compare_and_write": false, 00:08:23.441 "abort": false, 00:08:23.441 "seek_hole": false, 00:08:23.441 "seek_data": false, 00:08:23.441 "copy": false, 00:08:23.441 "nvme_iov_md": false 00:08:23.441 }, 00:08:23.441 "memory_domains": [ 00:08:23.441 { 00:08:23.441 "dma_device_id": 
"system", 00:08:23.441 "dma_device_type": 1 00:08:23.441 }, 00:08:23.441 { 00:08:23.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.441 "dma_device_type": 2 00:08:23.441 }, 00:08:23.441 { 00:08:23.441 "dma_device_id": "system", 00:08:23.441 "dma_device_type": 1 00:08:23.441 }, 00:08:23.441 { 00:08:23.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.441 "dma_device_type": 2 00:08:23.441 } 00:08:23.441 ], 00:08:23.441 "driver_specific": { 00:08:23.441 "raid": { 00:08:23.441 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:23.441 "strip_size_kb": 0, 00:08:23.441 "state": "online", 00:08:23.441 "raid_level": "raid1", 00:08:23.441 "superblock": true, 00:08:23.441 "num_base_bdevs": 2, 00:08:23.441 "num_base_bdevs_discovered": 2, 00:08:23.441 "num_base_bdevs_operational": 2, 00:08:23.441 "base_bdevs_list": [ 00:08:23.441 { 00:08:23.441 "name": "pt1", 00:08:23.441 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:23.441 "is_configured": true, 00:08:23.441 "data_offset": 2048, 00:08:23.441 "data_size": 63488 00:08:23.441 }, 00:08:23.441 { 00:08:23.441 "name": "pt2", 00:08:23.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.441 "is_configured": true, 00:08:23.441 "data_offset": 2048, 00:08:23.441 "data_size": 63488 00:08:23.441 } 00:08:23.441 ] 00:08:23.441 } 00:08:23.441 } 00:08:23.441 }' 00:08:23.441 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.441 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:23.441 pt2' 00:08:23.441 12:34:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.441 [2024-12-14 12:34:23.137729] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2765cb57-bbb3-4ce8-84d6-5cffa3824f74 '!=' 2765cb57-bbb3-4ce8-84d6-5cffa3824f74 ']' 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.441 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.700 [2024-12-14 12:34:23.177534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.700 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.700 "name": "raid_bdev1", 00:08:23.700 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:23.700 "strip_size_kb": 0, 00:08:23.700 "state": "online", 00:08:23.700 "raid_level": "raid1", 00:08:23.700 "superblock": true, 00:08:23.700 "num_base_bdevs": 2, 00:08:23.701 "num_base_bdevs_discovered": 1, 00:08:23.701 "num_base_bdevs_operational": 1, 00:08:23.701 "base_bdevs_list": [ 00:08:23.701 { 00:08:23.701 "name": null, 00:08:23.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.701 "is_configured": false, 00:08:23.701 "data_offset": 0, 00:08:23.701 "data_size": 63488 00:08:23.701 }, 00:08:23.701 { 00:08:23.701 "name": "pt2", 00:08:23.701 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.701 "is_configured": true, 00:08:23.701 "data_offset": 2048, 00:08:23.701 "data_size": 63488 00:08:23.701 } 00:08:23.701 ] 00:08:23.701 }' 
00:08:23.701 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.701 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.960 [2024-12-14 12:34:23.624726] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:23.960 [2024-12-14 12:34:23.624761] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.960 [2024-12-14 12:34:23.624843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.960 [2024-12-14 12:34:23.624891] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.960 [2024-12-14 12:34:23.624902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' 
']' 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.960 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.219 [2024-12-14 12:34:23.700566] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:24.219 [2024-12-14 12:34:23.700626] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.220 [2024-12-14 12:34:23.700642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:24.220 [2024-12-14 12:34:23.700652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.220 
[2024-12-14 12:34:23.702856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.220 [2024-12-14 12:34:23.702896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:24.220 [2024-12-14 12:34:23.702977] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:24.220 [2024-12-14 12:34:23.703057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:24.220 [2024-12-14 12:34:23.703169] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:24.220 [2024-12-14 12:34:23.703189] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:24.220 [2024-12-14 12:34:23.703419] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:24.220 [2024-12-14 12:34:23.703576] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:24.220 [2024-12-14 12:34:23.703593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:24.220 [2024-12-14 12:34:23.703735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.220 pt2 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.220 "name": "raid_bdev1", 00:08:24.220 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:24.220 "strip_size_kb": 0, 00:08:24.220 "state": "online", 00:08:24.220 "raid_level": "raid1", 00:08:24.220 "superblock": true, 00:08:24.220 "num_base_bdevs": 2, 00:08:24.220 "num_base_bdevs_discovered": 1, 00:08:24.220 "num_base_bdevs_operational": 1, 00:08:24.220 "base_bdevs_list": [ 00:08:24.220 { 00:08:24.220 "name": null, 00:08:24.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.220 "is_configured": false, 00:08:24.220 "data_offset": 2048, 00:08:24.220 "data_size": 63488 00:08:24.220 }, 00:08:24.220 { 00:08:24.220 "name": "pt2", 00:08:24.220 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.220 "is_configured": true, 00:08:24.220 "data_offset": 2048, 00:08:24.220 "data_size": 63488 00:08:24.220 } 00:08:24.220 ] 00:08:24.220 }' 
00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.220 12:34:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.479 [2024-12-14 12:34:24.171757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.479 [2024-12-14 12:34:24.171792] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.479 [2024-12-14 12:34:24.171872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.479 [2024-12-14 12:34:24.171923] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.479 [2024-12-14 12:34:24.171932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.479 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.739 [2024-12-14 12:34:24.235659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:24.739 [2024-12-14 12:34:24.235719] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.739 [2024-12-14 12:34:24.235744] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:24.739 [2024-12-14 12:34:24.235754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.739 [2024-12-14 12:34:24.237882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.739 [2024-12-14 12:34:24.237918] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:24.739 [2024-12-14 12:34:24.238003] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:24.739 [2024-12-14 12:34:24.238062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:24.739 [2024-12-14 12:34:24.238232] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:24.739 [2024-12-14 12:34:24.238252] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:24.739 [2024-12-14 12:34:24.238269] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:24.739 [2024-12-14 12:34:24.238323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:24.739 [2024-12-14 12:34:24.238398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:24.739 [2024-12-14 12:34:24.238417] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:24.739 [2024-12-14 12:34:24.238659] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:24.739 [2024-12-14 12:34:24.238816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:24.739 [2024-12-14 12:34:24.238833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:24.739 [2024-12-14 12:34:24.238973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.739 pt1 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.739 "name": "raid_bdev1", 00:08:24.739 "uuid": "2765cb57-bbb3-4ce8-84d6-5cffa3824f74", 00:08:24.739 "strip_size_kb": 0, 00:08:24.739 "state": "online", 00:08:24.739 "raid_level": "raid1", 00:08:24.739 "superblock": true, 00:08:24.739 "num_base_bdevs": 2, 00:08:24.739 "num_base_bdevs_discovered": 1, 00:08:24.739 "num_base_bdevs_operational": 1, 00:08:24.739 "base_bdevs_list": [ 00:08:24.739 { 00:08:24.739 "name": null, 00:08:24.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.739 "is_configured": false, 00:08:24.739 "data_offset": 2048, 00:08:24.739 "data_size": 63488 00:08:24.739 }, 00:08:24.739 { 00:08:24.739 "name": "pt2", 00:08:24.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:24.739 "is_configured": true, 00:08:24.739 "data_offset": 2048, 00:08:24.739 "data_size": 63488 00:08:24.739 } 00:08:24.739 ] 00:08:24.739 }' 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.739 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.998 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:24.998 12:34:24 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.998 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.998 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:24.998 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.256 [2024-12-14 12:34:24.766989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2765cb57-bbb3-4ce8-84d6-5cffa3824f74 '!=' 2765cb57-bbb3-4ce8-84d6-5cffa3824f74 ']' 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65010 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 65010 ']' 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 65010 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65010 00:08:25.256 12:34:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.256 killing process with pid 65010 00:08:25.256 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65010' 00:08:25.257 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 65010 00:08:25.257 [2024-12-14 12:34:24.840497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.257 [2024-12-14 12:34:24.840604] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:25.257 [2024-12-14 12:34:24.840658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:25.257 [2024-12-14 12:34:24.840672] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:25.257 12:34:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 65010 00:08:25.521 [2024-12-14 12:34:25.053431] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.457 12:34:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:26.457 00:08:26.457 real 0m6.068s 00:08:26.457 user 0m9.249s 00:08:26.457 sys 0m1.019s 00:08:26.457 12:34:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.457 12:34:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.457 ************************************ 00:08:26.457 END TEST raid_superblock_test 00:08:26.457 ************************************ 00:08:26.716 12:34:26 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:26.716 12:34:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.716 12:34:26 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.716 12:34:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.716 ************************************ 00:08:26.716 START TEST raid_read_error_test 00:08:26.716 ************************************ 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:26.716 12:34:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qtdkeHV6W8 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65339 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65339 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65339 ']' 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.716 12:34:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.716 [2024-12-14 12:34:26.340138] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:26.716 [2024-12-14 12:34:26.340259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65339 ] 00:08:26.975 [2024-12-14 12:34:26.510533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.975 [2024-12-14 12:34:26.621936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.235 [2024-12-14 12:34:26.819891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.235 [2024-12-14 12:34:26.819959] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.494 BaseBdev1_malloc 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.494 true 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.494 [2024-12-14 12:34:27.225005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:27.494 [2024-12-14 12:34:27.225070] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.494 [2024-12-14 12:34:27.225090] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:27.494 [2024-12-14 12:34:27.225100] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.494 [2024-12-14 12:34:27.227146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.494 [2024-12-14 12:34:27.227187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:27.494 BaseBdev1 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.494 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:27.754 BaseBdev2_malloc
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.754 true
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.754 [2024-12-14 12:34:27.290666] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:27.754 [2024-12-14 12:34:27.290720] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:27.754 [2024-12-14 12:34:27.290735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:27.754 [2024-12-14 12:34:27.290746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:27.754 [2024-12-14 12:34:27.292800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:27.754 [2024-12-14 12:34:27.292840] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:27.754 BaseBdev2
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.754 [2024-12-14 12:34:27.302701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:27.754 [2024-12-14 12:34:27.304493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:27.754 [2024-12-14 12:34:27.304703] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:27.754 [2024-12-14 12:34:27.304726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:27.754 [2024-12-14 12:34:27.304952] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:08:27.754 [2024-12-14 12:34:27.305142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:27.754 [2024-12-14 12:34:27.305161] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:27.754 [2024-12-14 12:34:27.305326] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:27.754 "name": "raid_bdev1",
00:08:27.754 "uuid": "105e6c34-f13d-4c7b-b4fc-7688e56b3eaa",
00:08:27.754 "strip_size_kb": 0,
00:08:27.754 "state": "online",
00:08:27.754 "raid_level": "raid1",
00:08:27.754 "superblock": true,
00:08:27.754 "num_base_bdevs": 2,
00:08:27.754 "num_base_bdevs_discovered": 2,
00:08:27.754 "num_base_bdevs_operational": 2,
00:08:27.754 "base_bdevs_list": [
00:08:27.754 {
00:08:27.754 "name": "BaseBdev1",
00:08:27.754 "uuid": "365dad10-6392-5a81-a05f-1ab587773041",
00:08:27.754 "is_configured": true,
00:08:27.754 "data_offset": 2048,
00:08:27.754 "data_size": 63488
00:08:27.754 },
00:08:27.754 {
00:08:27.754 "name": "BaseBdev2",
00:08:27.754 "uuid": "7b0488de-1fac-5306-a83b-5ff458cfd5d8",
00:08:27.754 "is_configured": true,
00:08:27.754 "data_offset": 2048,
00:08:27.754 "data_size": 63488
00:08:27.754 }
00:08:27.754 ]
00:08:27.754 }'
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:27.754 12:34:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:28.013 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:28.013 12:34:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:28.272 [2024-12-14 12:34:27.831234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]]
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:29.250 "name": "raid_bdev1",
00:08:29.250 "uuid": "105e6c34-f13d-4c7b-b4fc-7688e56b3eaa",
00:08:29.250 "strip_size_kb": 0,
00:08:29.250 "state": "online",
00:08:29.250 "raid_level": "raid1",
00:08:29.250 "superblock": true,
00:08:29.250 "num_base_bdevs": 2,
00:08:29.250 "num_base_bdevs_discovered": 2,
00:08:29.250 "num_base_bdevs_operational": 2,
00:08:29.250 "base_bdevs_list": [
00:08:29.250 {
00:08:29.250 "name": "BaseBdev1",
00:08:29.250 "uuid": "365dad10-6392-5a81-a05f-1ab587773041",
00:08:29.250 "is_configured": true,
00:08:29.250 "data_offset": 2048,
00:08:29.250 "data_size": 63488
00:08:29.250 },
00:08:29.250 {
00:08:29.250 "name": "BaseBdev2",
00:08:29.250 "uuid": "7b0488de-1fac-5306-a83b-5ff458cfd5d8",
00:08:29.250 "is_configured": true,
00:08:29.250 "data_offset": 2048,
00:08:29.250 "data_size": 63488
00:08:29.250 }
00:08:29.250 ]
00:08:29.250 }'
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:29.250 12:34:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:29.510 [2024-12-14 12:34:29.189316] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:29.510 [2024-12-14 12:34:29.189356] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:29.510 [2024-12-14 12:34:29.192071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:29.510 [2024-12-14 12:34:29.192124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:29.510 [2024-12-14 12:34:29.192209] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:29.510 [2024-12-14 12:34:29.192228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:29.510 {
00:08:29.510 "results": [
00:08:29.510 {
00:08:29.510 "job": "raid_bdev1",
00:08:29.510 "core_mask": "0x1",
00:08:29.510 "workload": "randrw",
00:08:29.510 "percentage": 50,
00:08:29.510 "status": "finished",
00:08:29.510 "queue_depth": 1,
00:08:29.510 "io_size": 131072,
00:08:29.510 "runtime": 1.359008,
00:08:29.510 "iops": 18034.47808990087,
00:08:29.510 "mibps": 2254.3097612376087,
00:08:29.510 "io_failed": 0,
00:08:29.510 "io_timeout": 0,
00:08:29.510 "avg_latency_us": 52.814324156120534,
00:08:29.510 "min_latency_us": 22.69344978165939,
00:08:29.510 "max_latency_us": 1409.4532751091704
00:08:29.510 }
00:08:29.510 ],
00:08:29.510 "core_count": 1
00:08:29.510 }
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65339
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65339 ']'
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65339
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65339
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65339' killing process with pid 65339
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65339 [2024-12-14 12:34:29.224430] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:29.510 12:34:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65339
00:08:29.770 [2024-12-14 12:34:29.359802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qtdkeHV6W8
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:08:31.152
00:08:31.152 real 0m4.278s
00:08:31.152 user 0m5.134s
00:08:31.152 sys 0m0.520s
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:31.152 12:34:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.152 ************************************
00:08:31.152 END TEST raid_read_error_test
00:08:31.152 ************************************
00:08:31.152 12:34:30 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write
00:08:31.152 12:34:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:31.152 12:34:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:31.152 12:34:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:31.152 ************************************
00:08:31.152 START TEST raid_write_error_test
00:08:31.152 ************************************
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']'
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2RudypL0PR
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65480
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65480
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65480 ']'
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:31.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 12:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 12:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:31.152 12:34:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.152 [2024-12-14 12:34:30.686157] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:08:31.152 [2024-12-14 12:34:30.686286] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65480 ]
00:08:31.152 [2024-12-14 12:34:30.858789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:31.412 [2024-12-14 12:34:30.967198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:08:31.671 [2024-12-14 12:34:31.166975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:31.671 [2024-12-14 12:34:31.167058] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.931 BaseBdev1_malloc
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.931 true
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.931 [2024-12-14 12:34:31.573247] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:08:31.931 [2024-12-14 12:34:31.573300] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:31.931 [2024-12-14 12:34:31.573334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:08:31.931 [2024-12-14 12:34:31.573344] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:31.931 [2024-12-14 12:34:31.575407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:31.931 [2024-12-14 12:34:31.575446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:08:31.931 BaseBdev1
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.931 BaseBdev2_malloc
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.931 true
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.931 [2024-12-14 12:34:31.639372] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:08:31.931 [2024-12-14 12:34:31.639427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:31.931 [2024-12-14 12:34:31.639442] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:31.931 [2024-12-14 12:34:31.639452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:31.931 [2024-12-14 12:34:31.641440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:31.931 [2024-12-14 12:34:31.641478] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:08:31.931 BaseBdev2
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.931 [2024-12-14 12:34:31.651406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:31.931 [2024-12-14 12:34:31.653197] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:08:31.931 [2024-12-14 12:34:31.653415] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:08:31.931 [2024-12-14 12:34:31.653430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:08:31.931 [2024-12-14 12:34:31.653667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:08:31.931 [2024-12-14 12:34:31.653856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:08:31.931 [2024-12-14 12:34:31.653874] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:08:31.931 [2024-12-14 12:34:31.654028] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.931 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.191 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:32.191 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:32.191 "name": "raid_bdev1",
00:08:32.191 "uuid": "16e94596-69f7-4110-ba80-f3489fdc21b1",
00:08:32.191 "strip_size_kb": 0,
00:08:32.191 "state": "online",
00:08:32.191 "raid_level": "raid1",
00:08:32.191 "superblock": true,
00:08:32.191 "num_base_bdevs": 2,
00:08:32.191 "num_base_bdevs_discovered": 2,
00:08:32.191 "num_base_bdevs_operational": 2,
00:08:32.191 "base_bdevs_list": [
00:08:32.191 {
00:08:32.191 "name": "BaseBdev1",
00:08:32.191 "uuid": "0f96d48a-8f29-5b74-bb17-3f78c2b3f2ee",
00:08:32.191 "is_configured": true,
00:08:32.191 "data_offset": 2048,
00:08:32.191 "data_size": 63488
00:08:32.191 },
00:08:32.191 {
00:08:32.191 "name": "BaseBdev2",
00:08:32.191 "uuid": "8d01c483-5df1-585b-9758-9e743f015e6b",
00:08:32.191 "is_configured": true,
00:08:32.191 "data_offset": 2048,
00:08:32.191 "data_size": 63488
00:08:32.191 }
00:08:32.191 ]
00:08:32.191 }'
00:08:32.191 12:34:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:32.191 12:34:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.451 12:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:08:32.451 12:34:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:32.711 [2024-12-14 12:34:32.195776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.650 [2024-12-14 12:34:33.111873] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1'
00:08:33.650 [2024-12-14 12:34:33.111936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:08:33.650 [2024-12-14 12:34:33.112136] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]]
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]]
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.650 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:33.650 "name": "raid_bdev1",
00:08:33.650 "uuid": "16e94596-69f7-4110-ba80-f3489fdc21b1",
00:08:33.650 "strip_size_kb": 0,
00:08:33.650 "state": "online",
00:08:33.650 "raid_level": "raid1",
00:08:33.650 "superblock": true,
00:08:33.650 "num_base_bdevs": 2,
00:08:33.650 "num_base_bdevs_discovered": 1,
00:08:33.650 "num_base_bdevs_operational": 1,
00:08:33.650 "base_bdevs_list": [
00:08:33.650 {
00:08:33.650 "name": null,
00:08:33.650 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:33.650 "is_configured": false,
00:08:33.650 "data_offset": 0,
00:08:33.650 "data_size": 63488
00:08:33.650 },
00:08:33.650 {
00:08:33.650 "name": "BaseBdev2",
00:08:33.650 "uuid": "8d01c483-5df1-585b-9758-9e743f015e6b",
00:08:33.650 "is_configured": true,
00:08:33.650 "data_offset": 2048,
00:08:33.651 "data_size": 63488
00:08:33.651 }
00:08:33.651 ]
00:08:33.651 }'
00:08:33.651 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:33.651 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.911 [2024-12-14 12:34:33.580850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:08:33.911 [2024-12-14 12:34:33.580885] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:33.911 [2024-12-14 12:34:33.583616] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:33.911 [2024-12-14 12:34:33.583661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:33.911 [2024-12-14 12:34:33.583719] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:33.911 [2024-12-14 12:34:33.583731] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65480
00:08:33.911 {
00:08:33.911 "results": [
00:08:33.911 {
00:08:33.911 "job": "raid_bdev1",
00:08:33.911 "core_mask": "0x1",
00:08:33.911 "workload": "randrw",
00:08:33.911 "percentage": 50,
00:08:33.911 "status": "finished",
00:08:33.911 "queue_depth": 1,
00:08:33.911 "io_size": 131072, 00:08:33.911 "runtime": 1.386017, 00:08:33.911 "iops": 20607.972340887594, 00:08:33.911 "mibps": 2575.9965426109493, 00:08:33.911 "io_failed": 0, 00:08:33.911 "io_timeout": 0, 00:08:33.911 "avg_latency_us": 45.8299927212152, 00:08:33.911 "min_latency_us": 23.02882096069869, 00:08:33.911 "max_latency_us": 1423.7624454148472 00:08:33.911 } 00:08:33.911 ], 00:08:33.911 "core_count": 1 00:08:33.911 } 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65480 ']' 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65480 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65480 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.911 killing process with pid 65480 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65480' 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65480 00:08:33.911 [2024-12-14 12:34:33.626907] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.911 12:34:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65480 00:08:34.171 [2024-12-14 12:34:33.757654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job 
/raidtest/tmp.2RudypL0PR 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:35.552 00:08:35.552 real 0m4.344s 00:08:35.552 user 0m5.233s 00:08:35.552 sys 0m0.530s 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.552 12:34:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.552 ************************************ 00:08:35.552 END TEST raid_write_error_test 00:08:35.552 ************************************ 00:08:35.552 12:34:34 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:35.552 12:34:34 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:35.552 12:34:34 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:35.552 12:34:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:35.552 12:34:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:35.552 12:34:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.552 ************************************ 00:08:35.552 START TEST raid_state_function_test 00:08:35.552 ************************************ 00:08:35.552 12:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:35.552 12:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # 
local raid_level=raid0 00:08:35.552 12:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:35.552 12:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:35.552 12:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:35.552 12:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:35.552 12:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:35.552 12:34:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65618 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65618' 00:08:35.552 Process raid pid: 65618 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65618 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65618 ']' 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.552 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.552 [2024-12-14 12:34:35.090395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:35.552 [2024-12-14 12:34:35.090600] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.552 [2024-12-14 12:34:35.263444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.811 [2024-12-14 12:34:35.377567] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.071 [2024-12-14 12:34:35.583188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.071 [2024-12-14 12:34:35.583271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.331 [2024-12-14 12:34:35.930064] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.331 [2024-12-14 12:34:35.930138] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.331 [2024-12-14 12:34:35.930148] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.331 [2024-12-14 12:34:35.930158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.331 [2024-12-14 12:34:35.930164] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:36.331 [2024-12-14 12:34:35.930172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.331 "name": "Existed_Raid", 00:08:36.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.331 "strip_size_kb": 64, 00:08:36.331 "state": "configuring", 00:08:36.331 "raid_level": "raid0", 00:08:36.331 "superblock": false, 00:08:36.331 "num_base_bdevs": 3, 00:08:36.331 "num_base_bdevs_discovered": 0, 00:08:36.331 "num_base_bdevs_operational": 3, 00:08:36.331 "base_bdevs_list": [ 00:08:36.331 { 00:08:36.331 "name": "BaseBdev1", 00:08:36.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.331 "is_configured": false, 00:08:36.331 "data_offset": 0, 00:08:36.331 "data_size": 0 00:08:36.331 }, 00:08:36.331 { 00:08:36.331 "name": "BaseBdev2", 00:08:36.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.331 "is_configured": false, 00:08:36.331 "data_offset": 0, 00:08:36.331 "data_size": 0 00:08:36.331 }, 00:08:36.331 { 00:08:36.331 "name": "BaseBdev3", 00:08:36.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.331 "is_configured": false, 00:08:36.331 "data_offset": 0, 00:08:36.331 "data_size": 0 00:08:36.331 } 00:08:36.331 ] 00:08:36.331 }' 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.331 12:34:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.898 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.898 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.898 12:34:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.898 [2024-12-14 12:34:36.345322] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.898 [2024-12-14 12:34:36.345428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:36.898 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.898 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:36.898 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.899 [2024-12-14 12:34:36.357319] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.899 [2024-12-14 12:34:36.357451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.899 [2024-12-14 12:34:36.357481] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.899 [2024-12-14 12:34:36.357503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.899 [2024-12-14 12:34:36.357521] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:36.899 [2024-12-14 12:34:36.357541] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.899 [2024-12-14 12:34:36.403134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.899 BaseBdev1 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.899 [ 00:08:36.899 { 00:08:36.899 "name": "BaseBdev1", 00:08:36.899 "aliases": [ 00:08:36.899 "178533ae-6156-49b0-8bd5-1c3d9dbd84d6" 00:08:36.899 ], 00:08:36.899 
"product_name": "Malloc disk", 00:08:36.899 "block_size": 512, 00:08:36.899 "num_blocks": 65536, 00:08:36.899 "uuid": "178533ae-6156-49b0-8bd5-1c3d9dbd84d6", 00:08:36.899 "assigned_rate_limits": { 00:08:36.899 "rw_ios_per_sec": 0, 00:08:36.899 "rw_mbytes_per_sec": 0, 00:08:36.899 "r_mbytes_per_sec": 0, 00:08:36.899 "w_mbytes_per_sec": 0 00:08:36.899 }, 00:08:36.899 "claimed": true, 00:08:36.899 "claim_type": "exclusive_write", 00:08:36.899 "zoned": false, 00:08:36.899 "supported_io_types": { 00:08:36.899 "read": true, 00:08:36.899 "write": true, 00:08:36.899 "unmap": true, 00:08:36.899 "flush": true, 00:08:36.899 "reset": true, 00:08:36.899 "nvme_admin": false, 00:08:36.899 "nvme_io": false, 00:08:36.899 "nvme_io_md": false, 00:08:36.899 "write_zeroes": true, 00:08:36.899 "zcopy": true, 00:08:36.899 "get_zone_info": false, 00:08:36.899 "zone_management": false, 00:08:36.899 "zone_append": false, 00:08:36.899 "compare": false, 00:08:36.899 "compare_and_write": false, 00:08:36.899 "abort": true, 00:08:36.899 "seek_hole": false, 00:08:36.899 "seek_data": false, 00:08:36.899 "copy": true, 00:08:36.899 "nvme_iov_md": false 00:08:36.899 }, 00:08:36.899 "memory_domains": [ 00:08:36.899 { 00:08:36.899 "dma_device_id": "system", 00:08:36.899 "dma_device_type": 1 00:08:36.899 }, 00:08:36.899 { 00:08:36.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.899 "dma_device_type": 2 00:08:36.899 } 00:08:36.899 ], 00:08:36.899 "driver_specific": {} 00:08:36.899 } 00:08:36.899 ] 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.899 12:34:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.899 "name": "Existed_Raid", 00:08:36.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.899 "strip_size_kb": 64, 00:08:36.899 "state": "configuring", 00:08:36.899 "raid_level": "raid0", 00:08:36.899 "superblock": false, 00:08:36.899 "num_base_bdevs": 3, 00:08:36.899 "num_base_bdevs_discovered": 1, 00:08:36.899 "num_base_bdevs_operational": 3, 00:08:36.899 "base_bdevs_list": [ 00:08:36.899 { 00:08:36.899 "name": "BaseBdev1", 
00:08:36.899 "uuid": "178533ae-6156-49b0-8bd5-1c3d9dbd84d6", 00:08:36.899 "is_configured": true, 00:08:36.899 "data_offset": 0, 00:08:36.899 "data_size": 65536 00:08:36.899 }, 00:08:36.899 { 00:08:36.899 "name": "BaseBdev2", 00:08:36.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.899 "is_configured": false, 00:08:36.899 "data_offset": 0, 00:08:36.899 "data_size": 0 00:08:36.899 }, 00:08:36.899 { 00:08:36.899 "name": "BaseBdev3", 00:08:36.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.899 "is_configured": false, 00:08:36.899 "data_offset": 0, 00:08:36.899 "data_size": 0 00:08:36.899 } 00:08:36.899 ] 00:08:36.899 }' 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.899 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.466 [2024-12-14 12:34:36.902313] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.466 [2024-12-14 12:34:36.902371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.466 [2024-12-14 
12:34:36.910336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.466 [2024-12-14 12:34:36.912105] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.466 [2024-12-14 12:34:36.912141] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.466 [2024-12-14 12:34:36.912150] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:37.466 [2024-12-14 12:34:36.912158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.466 "name": "Existed_Raid", 00:08:37.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.466 "strip_size_kb": 64, 00:08:37.466 "state": "configuring", 00:08:37.466 "raid_level": "raid0", 00:08:37.466 "superblock": false, 00:08:37.466 "num_base_bdevs": 3, 00:08:37.466 "num_base_bdevs_discovered": 1, 00:08:37.466 "num_base_bdevs_operational": 3, 00:08:37.466 "base_bdevs_list": [ 00:08:37.466 { 00:08:37.466 "name": "BaseBdev1", 00:08:37.466 "uuid": "178533ae-6156-49b0-8bd5-1c3d9dbd84d6", 00:08:37.466 "is_configured": true, 00:08:37.466 "data_offset": 0, 00:08:37.466 "data_size": 65536 00:08:37.466 }, 00:08:37.466 { 00:08:37.466 "name": "BaseBdev2", 00:08:37.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.466 "is_configured": false, 00:08:37.466 "data_offset": 0, 00:08:37.466 "data_size": 0 00:08:37.466 }, 00:08:37.466 { 00:08:37.466 "name": "BaseBdev3", 00:08:37.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.466 "is_configured": false, 00:08:37.466 "data_offset": 0, 00:08:37.466 "data_size": 0 00:08:37.466 } 00:08:37.466 ] 00:08:37.466 }' 00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:37.466 12:34:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.724 [2024-12-14 12:34:37.406191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:37.724 BaseBdev2 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:37.724 12:34:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.724 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.724 [ 00:08:37.724 { 00:08:37.724 "name": "BaseBdev2", 00:08:37.724 "aliases": [ 00:08:37.724 "98756b5d-888a-4c49-afdb-50b22a6c1702" 00:08:37.724 ], 00:08:37.724 "product_name": "Malloc disk", 00:08:37.724 "block_size": 512, 00:08:37.724 "num_blocks": 65536, 00:08:37.724 "uuid": "98756b5d-888a-4c49-afdb-50b22a6c1702", 00:08:37.724 "assigned_rate_limits": { 00:08:37.725 "rw_ios_per_sec": 0, 00:08:37.725 "rw_mbytes_per_sec": 0, 00:08:37.725 "r_mbytes_per_sec": 0, 00:08:37.725 "w_mbytes_per_sec": 0 00:08:37.725 }, 00:08:37.725 "claimed": true, 00:08:37.725 "claim_type": "exclusive_write", 00:08:37.725 "zoned": false, 00:08:37.725 "supported_io_types": { 00:08:37.725 "read": true, 00:08:37.725 "write": true, 00:08:37.725 "unmap": true, 00:08:37.725 "flush": true, 00:08:37.725 "reset": true, 00:08:37.725 "nvme_admin": false, 00:08:37.725 "nvme_io": false, 00:08:37.725 "nvme_io_md": false, 00:08:37.725 "write_zeroes": true, 00:08:37.725 "zcopy": true, 00:08:37.725 "get_zone_info": false, 00:08:37.725 "zone_management": false, 00:08:37.725 "zone_append": false, 00:08:37.725 "compare": false, 00:08:37.725 "compare_and_write": false, 00:08:37.725 "abort": true, 00:08:37.725 "seek_hole": false, 00:08:37.725 "seek_data": false, 00:08:37.725 "copy": true, 00:08:37.725 "nvme_iov_md": false 00:08:37.725 }, 00:08:37.725 "memory_domains": [ 00:08:37.725 { 00:08:37.725 "dma_device_id": "system", 00:08:37.725 "dma_device_type": 1 00:08:37.725 }, 00:08:37.725 { 00:08:37.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.725 "dma_device_type": 2 00:08:37.725 } 00:08:37.725 ], 00:08:37.725 "driver_specific": {} 00:08:37.725 } 00:08:37.725 ] 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.725 12:34:37 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.725 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.984 12:34:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.984 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.984 "name": "Existed_Raid", 00:08:37.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.985 "strip_size_kb": 64, 00:08:37.985 "state": "configuring", 00:08:37.985 "raid_level": "raid0", 00:08:37.985 "superblock": false, 00:08:37.985 "num_base_bdevs": 3, 00:08:37.985 "num_base_bdevs_discovered": 2, 00:08:37.985 "num_base_bdevs_operational": 3, 00:08:37.985 "base_bdevs_list": [ 00:08:37.985 { 00:08:37.985 "name": "BaseBdev1", 00:08:37.985 "uuid": "178533ae-6156-49b0-8bd5-1c3d9dbd84d6", 00:08:37.985 "is_configured": true, 00:08:37.985 "data_offset": 0, 00:08:37.985 "data_size": 65536 00:08:37.985 }, 00:08:37.985 { 00:08:37.985 "name": "BaseBdev2", 00:08:37.985 "uuid": "98756b5d-888a-4c49-afdb-50b22a6c1702", 00:08:37.985 "is_configured": true, 00:08:37.985 "data_offset": 0, 00:08:37.985 "data_size": 65536 00:08:37.985 }, 00:08:37.985 { 00:08:37.985 "name": "BaseBdev3", 00:08:37.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.985 "is_configured": false, 00:08:37.985 "data_offset": 0, 00:08:37.985 "data_size": 0 00:08:37.985 } 00:08:37.985 ] 00:08:37.985 }' 00:08:37.985 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.985 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.244 [2024-12-14 12:34:37.963962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.244 [2024-12-14 12:34:37.964126] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.244 [2024-12-14 12:34:37.964161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:38.244 [2024-12-14 12:34:37.964473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:38.244 [2024-12-14 12:34:37.964683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:38.244 [2024-12-14 12:34:37.964727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:38.244 [2024-12-14 12:34:37.965039] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.244 BaseBdev3 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.244 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.506 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.506 
12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.506 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.506 12:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.506 [ 00:08:38.506 { 00:08:38.506 "name": "BaseBdev3", 00:08:38.506 "aliases": [ 00:08:38.506 "65bb6b04-5198-4494-a3d9-6cafd296ac14" 00:08:38.506 ], 00:08:38.506 "product_name": "Malloc disk", 00:08:38.506 "block_size": 512, 00:08:38.506 "num_blocks": 65536, 00:08:38.506 "uuid": "65bb6b04-5198-4494-a3d9-6cafd296ac14", 00:08:38.506 "assigned_rate_limits": { 00:08:38.506 "rw_ios_per_sec": 0, 00:08:38.506 "rw_mbytes_per_sec": 0, 00:08:38.506 "r_mbytes_per_sec": 0, 00:08:38.506 "w_mbytes_per_sec": 0 00:08:38.506 }, 00:08:38.506 "claimed": true, 00:08:38.506 "claim_type": "exclusive_write", 00:08:38.506 "zoned": false, 00:08:38.506 "supported_io_types": { 00:08:38.506 "read": true, 00:08:38.506 "write": true, 00:08:38.506 "unmap": true, 00:08:38.506 "flush": true, 00:08:38.506 "reset": true, 00:08:38.506 "nvme_admin": false, 00:08:38.506 "nvme_io": false, 00:08:38.506 "nvme_io_md": false, 00:08:38.506 "write_zeroes": true, 00:08:38.506 "zcopy": true, 00:08:38.506 "get_zone_info": false, 00:08:38.506 "zone_management": false, 00:08:38.506 "zone_append": false, 00:08:38.506 "compare": false, 00:08:38.506 "compare_and_write": false, 00:08:38.506 "abort": true, 00:08:38.506 "seek_hole": false, 00:08:38.506 "seek_data": false, 00:08:38.506 "copy": true, 00:08:38.506 "nvme_iov_md": false 00:08:38.506 }, 00:08:38.506 "memory_domains": [ 00:08:38.506 { 00:08:38.506 "dma_device_id": "system", 00:08:38.506 "dma_device_type": 1 00:08:38.506 }, 00:08:38.506 { 00:08:38.506 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.506 "dma_device_type": 2 00:08:38.506 } 00:08:38.506 ], 00:08:38.506 "driver_specific": {} 00:08:38.506 } 00:08:38.506 ] 
00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.506 "name": "Existed_Raid", 00:08:38.506 "uuid": "ed275ac6-725e-460d-ad82-cc9fc146375f", 00:08:38.506 "strip_size_kb": 64, 00:08:38.506 "state": "online", 00:08:38.506 "raid_level": "raid0", 00:08:38.506 "superblock": false, 00:08:38.506 "num_base_bdevs": 3, 00:08:38.506 "num_base_bdevs_discovered": 3, 00:08:38.506 "num_base_bdevs_operational": 3, 00:08:38.506 "base_bdevs_list": [ 00:08:38.506 { 00:08:38.506 "name": "BaseBdev1", 00:08:38.506 "uuid": "178533ae-6156-49b0-8bd5-1c3d9dbd84d6", 00:08:38.506 "is_configured": true, 00:08:38.506 "data_offset": 0, 00:08:38.506 "data_size": 65536 00:08:38.506 }, 00:08:38.506 { 00:08:38.506 "name": "BaseBdev2", 00:08:38.506 "uuid": "98756b5d-888a-4c49-afdb-50b22a6c1702", 00:08:38.506 "is_configured": true, 00:08:38.506 "data_offset": 0, 00:08:38.506 "data_size": 65536 00:08:38.506 }, 00:08:38.506 { 00:08:38.506 "name": "BaseBdev3", 00:08:38.506 "uuid": "65bb6b04-5198-4494-a3d9-6cafd296ac14", 00:08:38.506 "is_configured": true, 00:08:38.506 "data_offset": 0, 00:08:38.506 "data_size": 65536 00:08:38.506 } 00:08:38.506 ] 00:08:38.506 }' 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.506 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.765 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.765 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.765 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.765 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:38.765 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.765 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.765 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.765 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.766 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.766 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.766 [2024-12-14 12:34:38.419570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.766 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.766 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.766 "name": "Existed_Raid", 00:08:38.766 "aliases": [ 00:08:38.766 "ed275ac6-725e-460d-ad82-cc9fc146375f" 00:08:38.766 ], 00:08:38.766 "product_name": "Raid Volume", 00:08:38.766 "block_size": 512, 00:08:38.766 "num_blocks": 196608, 00:08:38.766 "uuid": "ed275ac6-725e-460d-ad82-cc9fc146375f", 00:08:38.766 "assigned_rate_limits": { 00:08:38.766 "rw_ios_per_sec": 0, 00:08:38.766 "rw_mbytes_per_sec": 0, 00:08:38.766 "r_mbytes_per_sec": 0, 00:08:38.766 "w_mbytes_per_sec": 0 00:08:38.766 }, 00:08:38.766 "claimed": false, 00:08:38.766 "zoned": false, 00:08:38.766 "supported_io_types": { 00:08:38.766 "read": true, 00:08:38.766 "write": true, 00:08:38.766 "unmap": true, 00:08:38.766 "flush": true, 00:08:38.766 "reset": true, 00:08:38.766 "nvme_admin": false, 00:08:38.766 "nvme_io": false, 00:08:38.766 "nvme_io_md": false, 00:08:38.766 "write_zeroes": true, 00:08:38.766 "zcopy": false, 00:08:38.766 "get_zone_info": false, 00:08:38.766 "zone_management": false, 00:08:38.766 
"zone_append": false, 00:08:38.766 "compare": false, 00:08:38.766 "compare_and_write": false, 00:08:38.766 "abort": false, 00:08:38.766 "seek_hole": false, 00:08:38.766 "seek_data": false, 00:08:38.766 "copy": false, 00:08:38.766 "nvme_iov_md": false 00:08:38.766 }, 00:08:38.766 "memory_domains": [ 00:08:38.766 { 00:08:38.766 "dma_device_id": "system", 00:08:38.766 "dma_device_type": 1 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.766 "dma_device_type": 2 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "dma_device_id": "system", 00:08:38.766 "dma_device_type": 1 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.766 "dma_device_type": 2 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "dma_device_id": "system", 00:08:38.766 "dma_device_type": 1 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.766 "dma_device_type": 2 00:08:38.766 } 00:08:38.766 ], 00:08:38.766 "driver_specific": { 00:08:38.766 "raid": { 00:08:38.766 "uuid": "ed275ac6-725e-460d-ad82-cc9fc146375f", 00:08:38.766 "strip_size_kb": 64, 00:08:38.766 "state": "online", 00:08:38.766 "raid_level": "raid0", 00:08:38.766 "superblock": false, 00:08:38.766 "num_base_bdevs": 3, 00:08:38.766 "num_base_bdevs_discovered": 3, 00:08:38.766 "num_base_bdevs_operational": 3, 00:08:38.766 "base_bdevs_list": [ 00:08:38.766 { 00:08:38.766 "name": "BaseBdev1", 00:08:38.766 "uuid": "178533ae-6156-49b0-8bd5-1c3d9dbd84d6", 00:08:38.766 "is_configured": true, 00:08:38.766 "data_offset": 0, 00:08:38.766 "data_size": 65536 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "name": "BaseBdev2", 00:08:38.766 "uuid": "98756b5d-888a-4c49-afdb-50b22a6c1702", 00:08:38.766 "is_configured": true, 00:08:38.766 "data_offset": 0, 00:08:38.766 "data_size": 65536 00:08:38.766 }, 00:08:38.766 { 00:08:38.766 "name": "BaseBdev3", 00:08:38.766 "uuid": "65bb6b04-5198-4494-a3d9-6cafd296ac14", 00:08:38.766 "is_configured": true, 
00:08:38.766 "data_offset": 0, 00:08:38.766 "data_size": 65536 00:08:38.766 } 00:08:38.766 ] 00:08:38.766 } 00:08:38.766 } 00:08:38.766 }' 00:08:38.766 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:39.026 BaseBdev2 00:08:39.026 BaseBdev3' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.026 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.026 [2024-12-14 12:34:38.706818] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:39.026 [2024-12-14 12:34:38.706890] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:39.026 [2024-12-14 12:34:38.706965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.286 "name": "Existed_Raid", 00:08:39.286 "uuid": "ed275ac6-725e-460d-ad82-cc9fc146375f", 00:08:39.286 "strip_size_kb": 64, 00:08:39.286 "state": "offline", 00:08:39.286 "raid_level": "raid0", 00:08:39.286 "superblock": false, 00:08:39.286 "num_base_bdevs": 3, 00:08:39.286 "num_base_bdevs_discovered": 2, 00:08:39.286 "num_base_bdevs_operational": 2, 00:08:39.286 "base_bdevs_list": [ 00:08:39.286 { 00:08:39.286 "name": null, 00:08:39.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.286 "is_configured": false, 00:08:39.286 "data_offset": 0, 00:08:39.286 "data_size": 65536 00:08:39.286 }, 00:08:39.286 { 00:08:39.286 "name": "BaseBdev2", 00:08:39.286 "uuid": "98756b5d-888a-4c49-afdb-50b22a6c1702", 00:08:39.286 "is_configured": true, 00:08:39.286 "data_offset": 0, 00:08:39.286 "data_size": 65536 00:08:39.286 }, 00:08:39.286 { 00:08:39.286 "name": "BaseBdev3", 00:08:39.286 "uuid": "65bb6b04-5198-4494-a3d9-6cafd296ac14", 00:08:39.286 "is_configured": true, 00:08:39.286 "data_offset": 0, 00:08:39.286 "data_size": 65536 00:08:39.286 } 00:08:39.286 ] 00:08:39.286 }' 00:08:39.286 12:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.286 12:34:38 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:39.546 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:39.547 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:39.547 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.547 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.547 [2024-12-14 12:34:39.255392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:39.806 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.806 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:39.806 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.806 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.806 12:34:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:39.806 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.807 [2024-12-14 12:34:39.410791] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:39.807 [2024-12-14 12:34:39.410840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:39.807 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.067 BaseBdev2 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.067 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.067 [ 00:08:40.067 { 00:08:40.067 "name": "BaseBdev2", 00:08:40.067 "aliases": [ 00:08:40.067 "786f6f11-3a3b-492d-a867-84018e8ca806" 00:08:40.067 ], 00:08:40.067 "product_name": "Malloc disk", 00:08:40.067 "block_size": 512, 00:08:40.067 "num_blocks": 65536, 00:08:40.067 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:40.067 "assigned_rate_limits": { 00:08:40.067 "rw_ios_per_sec": 0, 00:08:40.067 "rw_mbytes_per_sec": 0, 00:08:40.067 "r_mbytes_per_sec": 0, 00:08:40.067 "w_mbytes_per_sec": 0 00:08:40.067 }, 00:08:40.067 "claimed": false, 00:08:40.067 "zoned": false, 00:08:40.067 "supported_io_types": { 00:08:40.067 "read": true, 00:08:40.067 "write": true, 00:08:40.067 "unmap": true, 00:08:40.067 "flush": true, 00:08:40.067 "reset": true, 00:08:40.067 "nvme_admin": false, 00:08:40.067 "nvme_io": false, 00:08:40.067 "nvme_io_md": false, 00:08:40.067 "write_zeroes": true, 00:08:40.068 "zcopy": true, 00:08:40.068 "get_zone_info": false, 00:08:40.068 "zone_management": false, 00:08:40.068 "zone_append": false, 00:08:40.068 "compare": false, 00:08:40.068 "compare_and_write": false, 00:08:40.068 "abort": true, 00:08:40.068 "seek_hole": false, 00:08:40.068 "seek_data": false, 00:08:40.068 "copy": true, 00:08:40.068 "nvme_iov_md": false 00:08:40.068 }, 00:08:40.068 "memory_domains": [ 00:08:40.068 { 00:08:40.068 "dma_device_id": "system", 00:08:40.068 "dma_device_type": 1 00:08:40.068 }, 
00:08:40.068 { 00:08:40.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.068 "dma_device_type": 2 00:08:40.068 } 00:08:40.068 ], 00:08:40.068 "driver_specific": {} 00:08:40.068 } 00:08:40.068 ] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.068 BaseBdev3 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.068 [ 00:08:40.068 { 00:08:40.068 "name": "BaseBdev3", 00:08:40.068 "aliases": [ 00:08:40.068 "b40a38cd-07ab-4d01-9992-c563e2f8d737" 00:08:40.068 ], 00:08:40.068 "product_name": "Malloc disk", 00:08:40.068 "block_size": 512, 00:08:40.068 "num_blocks": 65536, 00:08:40.068 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:40.068 "assigned_rate_limits": { 00:08:40.068 "rw_ios_per_sec": 0, 00:08:40.068 "rw_mbytes_per_sec": 0, 00:08:40.068 "r_mbytes_per_sec": 0, 00:08:40.068 "w_mbytes_per_sec": 0 00:08:40.068 }, 00:08:40.068 "claimed": false, 00:08:40.068 "zoned": false, 00:08:40.068 "supported_io_types": { 00:08:40.068 "read": true, 00:08:40.068 "write": true, 00:08:40.068 "unmap": true, 00:08:40.068 "flush": true, 00:08:40.068 "reset": true, 00:08:40.068 "nvme_admin": false, 00:08:40.068 "nvme_io": false, 00:08:40.068 "nvme_io_md": false, 00:08:40.068 "write_zeroes": true, 00:08:40.068 "zcopy": true, 00:08:40.068 "get_zone_info": false, 00:08:40.068 "zone_management": false, 00:08:40.068 "zone_append": false, 00:08:40.068 "compare": false, 00:08:40.068 "compare_and_write": false, 00:08:40.068 "abort": true, 00:08:40.068 "seek_hole": false, 00:08:40.068 "seek_data": false, 00:08:40.068 "copy": true, 00:08:40.068 "nvme_iov_md": false 00:08:40.068 }, 00:08:40.068 "memory_domains": [ 00:08:40.068 { 00:08:40.068 "dma_device_id": "system", 00:08:40.068 "dma_device_type": 1 00:08:40.068 }, 00:08:40.068 { 
00:08:40.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.068 "dma_device_type": 2 00:08:40.068 } 00:08:40.068 ], 00:08:40.068 "driver_specific": {} 00:08:40.068 } 00:08:40.068 ] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.068 [2024-12-14 12:34:39.713365] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.068 [2024-12-14 12:34:39.713477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.068 [2024-12-14 12:34:39.713527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:40.068 [2024-12-14 12:34:39.715471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.068 "name": "Existed_Raid", 00:08:40.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.068 "strip_size_kb": 64, 00:08:40.068 "state": "configuring", 00:08:40.068 "raid_level": "raid0", 00:08:40.068 "superblock": false, 00:08:40.068 "num_base_bdevs": 3, 00:08:40.068 "num_base_bdevs_discovered": 2, 00:08:40.068 "num_base_bdevs_operational": 3, 00:08:40.068 "base_bdevs_list": [ 00:08:40.068 { 00:08:40.068 "name": "BaseBdev1", 00:08:40.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.068 
"is_configured": false, 00:08:40.068 "data_offset": 0, 00:08:40.068 "data_size": 0 00:08:40.068 }, 00:08:40.068 { 00:08:40.068 "name": "BaseBdev2", 00:08:40.068 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:40.068 "is_configured": true, 00:08:40.068 "data_offset": 0, 00:08:40.068 "data_size": 65536 00:08:40.068 }, 00:08:40.068 { 00:08:40.068 "name": "BaseBdev3", 00:08:40.068 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:40.068 "is_configured": true, 00:08:40.068 "data_offset": 0, 00:08:40.068 "data_size": 65536 00:08:40.068 } 00:08:40.068 ] 00:08:40.068 }' 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.068 12:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.638 [2024-12-14 12:34:40.108695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.638 12:34:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.638 "name": "Existed_Raid", 00:08:40.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.638 "strip_size_kb": 64, 00:08:40.638 "state": "configuring", 00:08:40.638 "raid_level": "raid0", 00:08:40.638 "superblock": false, 00:08:40.638 "num_base_bdevs": 3, 00:08:40.638 "num_base_bdevs_discovered": 1, 00:08:40.638 "num_base_bdevs_operational": 3, 00:08:40.638 "base_bdevs_list": [ 00:08:40.638 { 00:08:40.638 "name": "BaseBdev1", 00:08:40.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.638 "is_configured": false, 00:08:40.638 "data_offset": 0, 00:08:40.638 "data_size": 0 00:08:40.638 }, 00:08:40.638 { 00:08:40.638 "name": null, 00:08:40.638 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:40.638 "is_configured": false, 00:08:40.638 "data_offset": 0, 
00:08:40.638 "data_size": 65536 00:08:40.638 }, 00:08:40.638 { 00:08:40.638 "name": "BaseBdev3", 00:08:40.638 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:40.638 "is_configured": true, 00:08:40.638 "data_offset": 0, 00:08:40.638 "data_size": 65536 00:08:40.638 } 00:08:40.638 ] 00:08:40.638 }' 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.638 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.898 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.156 [2024-12-14 12:34:40.655310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.156 BaseBdev1 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.156 [ 00:08:41.156 { 00:08:41.156 "name": "BaseBdev1", 00:08:41.156 "aliases": [ 00:08:41.156 "a13b4d53-6d6a-46a4-9348-d7d91166fce3" 00:08:41.156 ], 00:08:41.156 "product_name": "Malloc disk", 00:08:41.156 "block_size": 512, 00:08:41.156 "num_blocks": 65536, 00:08:41.156 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:41.156 "assigned_rate_limits": { 00:08:41.156 "rw_ios_per_sec": 0, 00:08:41.156 "rw_mbytes_per_sec": 0, 00:08:41.156 "r_mbytes_per_sec": 0, 00:08:41.156 "w_mbytes_per_sec": 0 00:08:41.156 }, 00:08:41.156 "claimed": true, 00:08:41.156 "claim_type": "exclusive_write", 00:08:41.156 "zoned": false, 00:08:41.156 "supported_io_types": { 00:08:41.156 "read": true, 00:08:41.156 "write": true, 00:08:41.156 "unmap": 
true, 00:08:41.156 "flush": true, 00:08:41.156 "reset": true, 00:08:41.156 "nvme_admin": false, 00:08:41.156 "nvme_io": false, 00:08:41.156 "nvme_io_md": false, 00:08:41.156 "write_zeroes": true, 00:08:41.156 "zcopy": true, 00:08:41.156 "get_zone_info": false, 00:08:41.156 "zone_management": false, 00:08:41.156 "zone_append": false, 00:08:41.156 "compare": false, 00:08:41.156 "compare_and_write": false, 00:08:41.156 "abort": true, 00:08:41.156 "seek_hole": false, 00:08:41.156 "seek_data": false, 00:08:41.156 "copy": true, 00:08:41.156 "nvme_iov_md": false 00:08:41.156 }, 00:08:41.156 "memory_domains": [ 00:08:41.156 { 00:08:41.156 "dma_device_id": "system", 00:08:41.156 "dma_device_type": 1 00:08:41.156 }, 00:08:41.156 { 00:08:41.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.156 "dma_device_type": 2 00:08:41.156 } 00:08:41.156 ], 00:08:41.156 "driver_specific": {} 00:08:41.156 } 00:08:41.156 ] 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.156 12:34:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.156 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.157 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.157 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.157 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.157 "name": "Existed_Raid", 00:08:41.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.157 "strip_size_kb": 64, 00:08:41.157 "state": "configuring", 00:08:41.157 "raid_level": "raid0", 00:08:41.157 "superblock": false, 00:08:41.157 "num_base_bdevs": 3, 00:08:41.157 "num_base_bdevs_discovered": 2, 00:08:41.157 "num_base_bdevs_operational": 3, 00:08:41.157 "base_bdevs_list": [ 00:08:41.157 { 00:08:41.157 "name": "BaseBdev1", 00:08:41.157 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:41.157 "is_configured": true, 00:08:41.157 "data_offset": 0, 00:08:41.157 "data_size": 65536 00:08:41.157 }, 00:08:41.157 { 00:08:41.157 "name": null, 00:08:41.157 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:41.157 "is_configured": false, 00:08:41.157 "data_offset": 0, 00:08:41.157 "data_size": 65536 00:08:41.157 }, 00:08:41.157 { 00:08:41.157 "name": "BaseBdev3", 00:08:41.157 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:41.157 "is_configured": true, 00:08:41.157 "data_offset": 0, 
00:08:41.157 "data_size": 65536 00:08:41.157 } 00:08:41.157 ] 00:08:41.157 }' 00:08:41.157 12:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.157 12:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.725 [2024-12-14 12:34:41.210448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.725 "name": "Existed_Raid", 00:08:41.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.725 "strip_size_kb": 64, 00:08:41.725 "state": "configuring", 00:08:41.725 "raid_level": "raid0", 00:08:41.725 "superblock": false, 00:08:41.725 "num_base_bdevs": 3, 00:08:41.725 "num_base_bdevs_discovered": 1, 00:08:41.725 "num_base_bdevs_operational": 3, 00:08:41.725 "base_bdevs_list": [ 00:08:41.725 { 00:08:41.725 "name": "BaseBdev1", 00:08:41.725 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:41.725 "is_configured": true, 00:08:41.725 "data_offset": 0, 00:08:41.725 "data_size": 65536 00:08:41.725 }, 00:08:41.725 { 
00:08:41.725 "name": null, 00:08:41.725 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:41.725 "is_configured": false, 00:08:41.725 "data_offset": 0, 00:08:41.725 "data_size": 65536 00:08:41.725 }, 00:08:41.725 { 00:08:41.725 "name": null, 00:08:41.725 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:41.725 "is_configured": false, 00:08:41.725 "data_offset": 0, 00:08:41.725 "data_size": 65536 00:08:41.725 } 00:08:41.725 ] 00:08:41.725 }' 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.725 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.985 [2024-12-14 12:34:41.677663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.985 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.245 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.245 "name": "Existed_Raid", 00:08:42.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.245 "strip_size_kb": 64, 00:08:42.245 "state": "configuring", 00:08:42.245 "raid_level": "raid0", 00:08:42.245 
"superblock": false, 00:08:42.245 "num_base_bdevs": 3, 00:08:42.245 "num_base_bdevs_discovered": 2, 00:08:42.245 "num_base_bdevs_operational": 3, 00:08:42.245 "base_bdevs_list": [ 00:08:42.245 { 00:08:42.245 "name": "BaseBdev1", 00:08:42.245 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:42.245 "is_configured": true, 00:08:42.245 "data_offset": 0, 00:08:42.245 "data_size": 65536 00:08:42.245 }, 00:08:42.245 { 00:08:42.245 "name": null, 00:08:42.245 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:42.245 "is_configured": false, 00:08:42.245 "data_offset": 0, 00:08:42.245 "data_size": 65536 00:08:42.245 }, 00:08:42.245 { 00:08:42.245 "name": "BaseBdev3", 00:08:42.245 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:42.245 "is_configured": true, 00:08:42.245 "data_offset": 0, 00:08:42.245 "data_size": 65536 00:08:42.245 } 00:08:42.245 ] 00:08:42.245 }' 00:08:42.245 12:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.245 12:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:42.505 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.505 [2024-12-14 12:34:42.208784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.765 "name": "Existed_Raid", 00:08:42.765 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.765 "strip_size_kb": 64, 00:08:42.765 "state": "configuring", 00:08:42.765 "raid_level": "raid0", 00:08:42.765 "superblock": false, 00:08:42.765 "num_base_bdevs": 3, 00:08:42.765 "num_base_bdevs_discovered": 1, 00:08:42.765 "num_base_bdevs_operational": 3, 00:08:42.765 "base_bdevs_list": [ 00:08:42.765 { 00:08:42.765 "name": null, 00:08:42.765 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:42.765 "is_configured": false, 00:08:42.765 "data_offset": 0, 00:08:42.765 "data_size": 65536 00:08:42.765 }, 00:08:42.765 { 00:08:42.765 "name": null, 00:08:42.765 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:42.765 "is_configured": false, 00:08:42.765 "data_offset": 0, 00:08:42.765 "data_size": 65536 00:08:42.765 }, 00:08:42.765 { 00:08:42.765 "name": "BaseBdev3", 00:08:42.765 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:42.765 "is_configured": true, 00:08:42.765 "data_offset": 0, 00:08:42.765 "data_size": 65536 00:08:42.765 } 00:08:42.765 ] 00:08:42.765 }' 00:08:42.765 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.766 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.025 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:43.025 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.025 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.025 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.025 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.284 [2024-12-14 12:34:42.781841] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.284 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.285 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.285 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.285 "name": "Existed_Raid", 00:08:43.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.285 "strip_size_kb": 64, 00:08:43.285 "state": "configuring", 00:08:43.285 "raid_level": "raid0", 00:08:43.285 "superblock": false, 00:08:43.285 "num_base_bdevs": 3, 00:08:43.285 "num_base_bdevs_discovered": 2, 00:08:43.285 "num_base_bdevs_operational": 3, 00:08:43.285 "base_bdevs_list": [ 00:08:43.285 { 00:08:43.285 "name": null, 00:08:43.285 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:43.285 "is_configured": false, 00:08:43.285 "data_offset": 0, 00:08:43.285 "data_size": 65536 00:08:43.285 }, 00:08:43.285 { 00:08:43.285 "name": "BaseBdev2", 00:08:43.285 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:43.285 "is_configured": true, 00:08:43.285 "data_offset": 0, 00:08:43.285 "data_size": 65536 00:08:43.285 }, 00:08:43.285 { 00:08:43.285 "name": "BaseBdev3", 00:08:43.285 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:43.285 "is_configured": true, 00:08:43.285 "data_offset": 0, 00:08:43.285 "data_size": 65536 00:08:43.285 } 00:08:43.285 ] 00:08:43.285 }' 00:08:43.285 12:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.285 12:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.544 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.544 12:34:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:43.544 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.544 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.803 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.803 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:43.803 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.803 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.803 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a13b4d53-6d6a-46a4-9348-d7d91166fce3 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.804 [2024-12-14 12:34:43.401706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:43.804 [2024-12-14 12:34:43.401751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:43.804 [2024-12-14 12:34:43.401760] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:43.804 [2024-12-14 12:34:43.401994] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:43.804 [2024-12-14 12:34:43.402195] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:43.804 [2024-12-14 12:34:43.402206] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:43.804 [2024-12-14 12:34:43.402482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.804 NewBaseBdev 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:43.804 [ 00:08:43.804 { 00:08:43.804 "name": "NewBaseBdev", 00:08:43.804 "aliases": [ 00:08:43.804 "a13b4d53-6d6a-46a4-9348-d7d91166fce3" 00:08:43.804 ], 00:08:43.804 "product_name": "Malloc disk", 00:08:43.804 "block_size": 512, 00:08:43.804 "num_blocks": 65536, 00:08:43.804 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:43.804 "assigned_rate_limits": { 00:08:43.804 "rw_ios_per_sec": 0, 00:08:43.804 "rw_mbytes_per_sec": 0, 00:08:43.804 "r_mbytes_per_sec": 0, 00:08:43.804 "w_mbytes_per_sec": 0 00:08:43.804 }, 00:08:43.804 "claimed": true, 00:08:43.804 "claim_type": "exclusive_write", 00:08:43.804 "zoned": false, 00:08:43.804 "supported_io_types": { 00:08:43.804 "read": true, 00:08:43.804 "write": true, 00:08:43.804 "unmap": true, 00:08:43.804 "flush": true, 00:08:43.804 "reset": true, 00:08:43.804 "nvme_admin": false, 00:08:43.804 "nvme_io": false, 00:08:43.804 "nvme_io_md": false, 00:08:43.804 "write_zeroes": true, 00:08:43.804 "zcopy": true, 00:08:43.804 "get_zone_info": false, 00:08:43.804 "zone_management": false, 00:08:43.804 "zone_append": false, 00:08:43.804 "compare": false, 00:08:43.804 "compare_and_write": false, 00:08:43.804 "abort": true, 00:08:43.804 "seek_hole": false, 00:08:43.804 "seek_data": false, 00:08:43.804 "copy": true, 00:08:43.804 "nvme_iov_md": false 00:08:43.804 }, 00:08:43.804 "memory_domains": [ 00:08:43.804 { 00:08:43.804 "dma_device_id": "system", 00:08:43.804 "dma_device_type": 1 00:08:43.804 }, 00:08:43.804 { 00:08:43.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.804 "dma_device_type": 2 00:08:43.804 } 00:08:43.804 ], 00:08:43.804 "driver_specific": {} 00:08:43.804 } 00:08:43.804 ] 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.804 "name": "Existed_Raid", 00:08:43.804 "uuid": "4ad6c346-29ad-4c78-a57e-a6a495f91990", 00:08:43.804 "strip_size_kb": 64, 00:08:43.804 "state": "online", 00:08:43.804 "raid_level": "raid0", 00:08:43.804 "superblock": false, 00:08:43.804 "num_base_bdevs": 3, 00:08:43.804 
"num_base_bdevs_discovered": 3, 00:08:43.804 "num_base_bdevs_operational": 3, 00:08:43.804 "base_bdevs_list": [ 00:08:43.804 { 00:08:43.804 "name": "NewBaseBdev", 00:08:43.804 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:43.804 "is_configured": true, 00:08:43.804 "data_offset": 0, 00:08:43.804 "data_size": 65536 00:08:43.804 }, 00:08:43.804 { 00:08:43.804 "name": "BaseBdev2", 00:08:43.804 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:43.804 "is_configured": true, 00:08:43.804 "data_offset": 0, 00:08:43.804 "data_size": 65536 00:08:43.804 }, 00:08:43.804 { 00:08:43.804 "name": "BaseBdev3", 00:08:43.804 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:43.804 "is_configured": true, 00:08:43.804 "data_offset": 0, 00:08:43.804 "data_size": 65536 00:08:43.804 } 00:08:43.804 ] 00:08:43.804 }' 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.804 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.373 [2024-12-14 12:34:43.913279] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.373 "name": "Existed_Raid", 00:08:44.373 "aliases": [ 00:08:44.373 "4ad6c346-29ad-4c78-a57e-a6a495f91990" 00:08:44.373 ], 00:08:44.373 "product_name": "Raid Volume", 00:08:44.373 "block_size": 512, 00:08:44.373 "num_blocks": 196608, 00:08:44.373 "uuid": "4ad6c346-29ad-4c78-a57e-a6a495f91990", 00:08:44.373 "assigned_rate_limits": { 00:08:44.373 "rw_ios_per_sec": 0, 00:08:44.373 "rw_mbytes_per_sec": 0, 00:08:44.373 "r_mbytes_per_sec": 0, 00:08:44.373 "w_mbytes_per_sec": 0 00:08:44.373 }, 00:08:44.373 "claimed": false, 00:08:44.373 "zoned": false, 00:08:44.373 "supported_io_types": { 00:08:44.373 "read": true, 00:08:44.373 "write": true, 00:08:44.373 "unmap": true, 00:08:44.373 "flush": true, 00:08:44.373 "reset": true, 00:08:44.373 "nvme_admin": false, 00:08:44.373 "nvme_io": false, 00:08:44.373 "nvme_io_md": false, 00:08:44.373 "write_zeroes": true, 00:08:44.373 "zcopy": false, 00:08:44.373 "get_zone_info": false, 00:08:44.373 "zone_management": false, 00:08:44.373 "zone_append": false, 00:08:44.373 "compare": false, 00:08:44.373 "compare_and_write": false, 00:08:44.373 "abort": false, 00:08:44.373 "seek_hole": false, 00:08:44.373 "seek_data": false, 00:08:44.373 "copy": false, 00:08:44.373 "nvme_iov_md": false 00:08:44.373 }, 00:08:44.373 "memory_domains": [ 00:08:44.373 { 00:08:44.373 "dma_device_id": "system", 00:08:44.373 "dma_device_type": 1 00:08:44.373 }, 00:08:44.373 { 00:08:44.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.373 "dma_device_type": 2 00:08:44.373 }, 
00:08:44.373 { 00:08:44.373 "dma_device_id": "system", 00:08:44.373 "dma_device_type": 1 00:08:44.373 }, 00:08:44.373 { 00:08:44.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.373 "dma_device_type": 2 00:08:44.373 }, 00:08:44.373 { 00:08:44.373 "dma_device_id": "system", 00:08:44.373 "dma_device_type": 1 00:08:44.373 }, 00:08:44.373 { 00:08:44.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.373 "dma_device_type": 2 00:08:44.373 } 00:08:44.373 ], 00:08:44.373 "driver_specific": { 00:08:44.373 "raid": { 00:08:44.373 "uuid": "4ad6c346-29ad-4c78-a57e-a6a495f91990", 00:08:44.373 "strip_size_kb": 64, 00:08:44.373 "state": "online", 00:08:44.373 "raid_level": "raid0", 00:08:44.373 "superblock": false, 00:08:44.373 "num_base_bdevs": 3, 00:08:44.373 "num_base_bdevs_discovered": 3, 00:08:44.373 "num_base_bdevs_operational": 3, 00:08:44.373 "base_bdevs_list": [ 00:08:44.373 { 00:08:44.373 "name": "NewBaseBdev", 00:08:44.373 "uuid": "a13b4d53-6d6a-46a4-9348-d7d91166fce3", 00:08:44.373 "is_configured": true, 00:08:44.373 "data_offset": 0, 00:08:44.373 "data_size": 65536 00:08:44.373 }, 00:08:44.373 { 00:08:44.373 "name": "BaseBdev2", 00:08:44.373 "uuid": "786f6f11-3a3b-492d-a867-84018e8ca806", 00:08:44.373 "is_configured": true, 00:08:44.373 "data_offset": 0, 00:08:44.373 "data_size": 65536 00:08:44.373 }, 00:08:44.373 { 00:08:44.373 "name": "BaseBdev3", 00:08:44.373 "uuid": "b40a38cd-07ab-4d01-9992-c563e2f8d737", 00:08:44.373 "is_configured": true, 00:08:44.373 "data_offset": 0, 00:08:44.373 "data_size": 65536 00:08:44.373 } 00:08:44.373 ] 00:08:44.373 } 00:08:44.373 } 00:08:44.373 }' 00:08:44.373 12:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.373 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:44.373 BaseBdev2 00:08:44.373 BaseBdev3' 00:08:44.373 12:34:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.374 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.633 [2024-12-14 12:34:44.212409] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:44.633 [2024-12-14 12:34:44.212478] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.633 [2024-12-14 12:34:44.212585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.633 [2024-12-14 12:34:44.212671] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.633 [2024-12-14 12:34:44.212724] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65618 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65618 ']' 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65618 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65618 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.633 killing process with pid 65618 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65618' 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65618 00:08:44.633 [2024-12-14 12:34:44.261395] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.633 12:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65618 00:08:44.892 [2024-12-14 12:34:44.560115] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.331 12:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:46.331 00:08:46.331 real 0m10.694s 00:08:46.331 user 0m17.040s 00:08:46.331 sys 0m1.883s 00:08:46.331 12:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:46.331 12:34:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.331 ************************************ 00:08:46.331 END TEST raid_state_function_test 00:08:46.331 ************************************ 00:08:46.331 12:34:45 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:46.331 12:34:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.331 12:34:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.331 12:34:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.331 ************************************ 00:08:46.332 START TEST raid_state_function_test_sb 00:08:46.332 ************************************ 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:46.332 Process raid pid: 66240 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66240 
00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66240' 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66240 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66240 ']' 00:08:46.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.332 12:34:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.332 [2024-12-14 12:34:45.862823] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:46.332 [2024-12-14 12:34:45.862996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.332 [2024-12-14 12:34:46.039455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.591 [2024-12-14 12:34:46.153982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.851 [2024-12-14 12:34:46.358457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.851 [2024-12-14 12:34:46.358500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.110 [2024-12-14 12:34:46.672297] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.110 [2024-12-14 12:34:46.672349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.110 [2024-12-14 12:34:46.672366] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.110 [2024-12-14 12:34:46.672377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.110 [2024-12-14 12:34:46.672387] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:47.110 [2024-12-14 12:34:46.672396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.110 "name": "Existed_Raid", 00:08:47.110 "uuid": "5a0a8d1a-7092-45c8-a7fd-bc18b696a4f4", 00:08:47.110 "strip_size_kb": 64, 00:08:47.110 "state": "configuring", 00:08:47.110 "raid_level": "raid0", 00:08:47.110 "superblock": true, 00:08:47.110 "num_base_bdevs": 3, 00:08:47.110 "num_base_bdevs_discovered": 0, 00:08:47.110 "num_base_bdevs_operational": 3, 00:08:47.110 "base_bdevs_list": [ 00:08:47.110 { 00:08:47.110 "name": "BaseBdev1", 00:08:47.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.110 "is_configured": false, 00:08:47.110 "data_offset": 0, 00:08:47.110 "data_size": 0 00:08:47.110 }, 00:08:47.110 { 00:08:47.110 "name": "BaseBdev2", 00:08:47.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.110 "is_configured": false, 00:08:47.110 "data_offset": 0, 00:08:47.110 "data_size": 0 00:08:47.110 }, 00:08:47.110 { 00:08:47.110 "name": "BaseBdev3", 00:08:47.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.110 "is_configured": false, 00:08:47.110 "data_offset": 0, 00:08:47.110 "data_size": 0 00:08:47.110 } 00:08:47.110 ] 00:08:47.110 }' 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.110 12:34:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 [2024-12-14 12:34:47.127486] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.680 [2024-12-14 12:34:47.127578] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 [2024-12-14 12:34:47.139463] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.680 [2024-12-14 12:34:47.139542] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.680 [2024-12-14 12:34:47.139569] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.680 [2024-12-14 12:34:47.139591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.680 [2024-12-14 12:34:47.139608] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.680 [2024-12-14 12:34:47.139628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 [2024-12-14 12:34:47.183673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.680 BaseBdev1 
00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.680 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.680 [ 00:08:47.680 { 00:08:47.680 "name": "BaseBdev1", 00:08:47.680 "aliases": [ 00:08:47.680 "27039607-2f20-4fb4-a273-b0ac9778c57e" 00:08:47.680 ], 00:08:47.680 "product_name": "Malloc disk", 00:08:47.680 "block_size": 512, 00:08:47.680 "num_blocks": 65536, 00:08:47.680 "uuid": "27039607-2f20-4fb4-a273-b0ac9778c57e", 00:08:47.680 "assigned_rate_limits": { 00:08:47.680 
"rw_ios_per_sec": 0, 00:08:47.680 "rw_mbytes_per_sec": 0, 00:08:47.680 "r_mbytes_per_sec": 0, 00:08:47.680 "w_mbytes_per_sec": 0 00:08:47.680 }, 00:08:47.680 "claimed": true, 00:08:47.680 "claim_type": "exclusive_write", 00:08:47.680 "zoned": false, 00:08:47.680 "supported_io_types": { 00:08:47.680 "read": true, 00:08:47.680 "write": true, 00:08:47.680 "unmap": true, 00:08:47.680 "flush": true, 00:08:47.680 "reset": true, 00:08:47.680 "nvme_admin": false, 00:08:47.680 "nvme_io": false, 00:08:47.680 "nvme_io_md": false, 00:08:47.680 "write_zeroes": true, 00:08:47.680 "zcopy": true, 00:08:47.680 "get_zone_info": false, 00:08:47.680 "zone_management": false, 00:08:47.680 "zone_append": false, 00:08:47.680 "compare": false, 00:08:47.680 "compare_and_write": false, 00:08:47.681 "abort": true, 00:08:47.681 "seek_hole": false, 00:08:47.681 "seek_data": false, 00:08:47.681 "copy": true, 00:08:47.681 "nvme_iov_md": false 00:08:47.681 }, 00:08:47.681 "memory_domains": [ 00:08:47.681 { 00:08:47.681 "dma_device_id": "system", 00:08:47.681 "dma_device_type": 1 00:08:47.681 }, 00:08:47.681 { 00:08:47.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.681 "dma_device_type": 2 00:08:47.681 } 00:08:47.681 ], 00:08:47.681 "driver_specific": {} 00:08:47.681 } 00:08:47.681 ] 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.681 "name": "Existed_Raid", 00:08:47.681 "uuid": "4af9c9de-0979-4939-a76b-043db11780f4", 00:08:47.681 "strip_size_kb": 64, 00:08:47.681 "state": "configuring", 00:08:47.681 "raid_level": "raid0", 00:08:47.681 "superblock": true, 00:08:47.681 "num_base_bdevs": 3, 00:08:47.681 "num_base_bdevs_discovered": 1, 00:08:47.681 "num_base_bdevs_operational": 3, 00:08:47.681 "base_bdevs_list": [ 00:08:47.681 { 00:08:47.681 "name": "BaseBdev1", 00:08:47.681 "uuid": "27039607-2f20-4fb4-a273-b0ac9778c57e", 00:08:47.681 "is_configured": true, 00:08:47.681 "data_offset": 2048, 00:08:47.681 "data_size": 63488 
00:08:47.681 }, 00:08:47.681 { 00:08:47.681 "name": "BaseBdev2", 00:08:47.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.681 "is_configured": false, 00:08:47.681 "data_offset": 0, 00:08:47.681 "data_size": 0 00:08:47.681 }, 00:08:47.681 { 00:08:47.681 "name": "BaseBdev3", 00:08:47.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.681 "is_configured": false, 00:08:47.681 "data_offset": 0, 00:08:47.681 "data_size": 0 00:08:47.681 } 00:08:47.681 ] 00:08:47.681 }' 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.681 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.250 [2024-12-14 12:34:47.698869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:48.250 [2024-12-14 12:34:47.698925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.250 [2024-12-14 12:34:47.706899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:48.250 [2024-12-14 
12:34:47.708732] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:48.250 [2024-12-14 12:34:47.708776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:48.250 [2024-12-14 12:34:47.708785] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:48.250 [2024-12-14 12:34:47.708795] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.250 "name": "Existed_Raid", 00:08:48.250 "uuid": "21fa2fb8-1e7e-493d-9ff5-d8a7e906fabe", 00:08:48.250 "strip_size_kb": 64, 00:08:48.250 "state": "configuring", 00:08:48.250 "raid_level": "raid0", 00:08:48.250 "superblock": true, 00:08:48.250 "num_base_bdevs": 3, 00:08:48.250 "num_base_bdevs_discovered": 1, 00:08:48.250 "num_base_bdevs_operational": 3, 00:08:48.250 "base_bdevs_list": [ 00:08:48.250 { 00:08:48.250 "name": "BaseBdev1", 00:08:48.250 "uuid": "27039607-2f20-4fb4-a273-b0ac9778c57e", 00:08:48.250 "is_configured": true, 00:08:48.250 "data_offset": 2048, 00:08:48.250 "data_size": 63488 00:08:48.250 }, 00:08:48.250 { 00:08:48.250 "name": "BaseBdev2", 00:08:48.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.250 "is_configured": false, 00:08:48.250 "data_offset": 0, 00:08:48.250 "data_size": 0 00:08:48.250 }, 00:08:48.250 { 00:08:48.250 "name": "BaseBdev3", 00:08:48.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.250 "is_configured": false, 00:08:48.250 "data_offset": 0, 00:08:48.250 "data_size": 0 00:08:48.250 } 00:08:48.250 ] 00:08:48.250 }' 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.250 12:34:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.511 [2024-12-14 12:34:48.127857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.511 BaseBdev2 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.511 [ 00:08:48.511 { 00:08:48.511 "name": "BaseBdev2", 00:08:48.511 "aliases": [ 00:08:48.511 "eb9650b5-4783-42a2-a52b-3afe1bf60a7f" 00:08:48.511 ], 00:08:48.511 "product_name": "Malloc disk", 00:08:48.511 "block_size": 512, 00:08:48.511 "num_blocks": 65536, 00:08:48.511 "uuid": "eb9650b5-4783-42a2-a52b-3afe1bf60a7f", 00:08:48.511 "assigned_rate_limits": { 00:08:48.511 "rw_ios_per_sec": 0, 00:08:48.511 "rw_mbytes_per_sec": 0, 00:08:48.511 "r_mbytes_per_sec": 0, 00:08:48.511 "w_mbytes_per_sec": 0 00:08:48.511 }, 00:08:48.511 "claimed": true, 00:08:48.511 "claim_type": "exclusive_write", 00:08:48.511 "zoned": false, 00:08:48.511 "supported_io_types": { 00:08:48.511 "read": true, 00:08:48.511 "write": true, 00:08:48.511 "unmap": true, 00:08:48.511 "flush": true, 00:08:48.511 "reset": true, 00:08:48.511 "nvme_admin": false, 00:08:48.511 "nvme_io": false, 00:08:48.511 "nvme_io_md": false, 00:08:48.511 "write_zeroes": true, 00:08:48.511 "zcopy": true, 00:08:48.511 "get_zone_info": false, 00:08:48.511 "zone_management": false, 00:08:48.511 "zone_append": false, 00:08:48.511 "compare": false, 00:08:48.511 "compare_and_write": false, 00:08:48.511 "abort": true, 00:08:48.511 "seek_hole": false, 00:08:48.511 "seek_data": false, 00:08:48.511 "copy": true, 00:08:48.511 "nvme_iov_md": false 00:08:48.511 }, 00:08:48.511 "memory_domains": [ 00:08:48.511 { 00:08:48.511 "dma_device_id": "system", 00:08:48.511 "dma_device_type": 1 00:08:48.511 }, 00:08:48.511 { 00:08:48.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.511 "dma_device_type": 2 00:08:48.511 } 00:08:48.511 ], 00:08:48.511 "driver_specific": {} 00:08:48.511 } 00:08:48.511 ] 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.511 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.511 "name": "Existed_Raid", 00:08:48.511 "uuid": "21fa2fb8-1e7e-493d-9ff5-d8a7e906fabe", 00:08:48.511 "strip_size_kb": 64, 00:08:48.511 "state": "configuring", 00:08:48.511 "raid_level": "raid0", 00:08:48.511 "superblock": true, 00:08:48.511 "num_base_bdevs": 3, 00:08:48.511 "num_base_bdevs_discovered": 2, 00:08:48.511 "num_base_bdevs_operational": 3, 00:08:48.511 "base_bdevs_list": [ 00:08:48.511 { 00:08:48.511 "name": "BaseBdev1", 00:08:48.511 "uuid": "27039607-2f20-4fb4-a273-b0ac9778c57e", 00:08:48.511 "is_configured": true, 00:08:48.511 "data_offset": 2048, 00:08:48.511 "data_size": 63488 00:08:48.511 }, 00:08:48.511 { 00:08:48.511 "name": "BaseBdev2", 00:08:48.511 "uuid": "eb9650b5-4783-42a2-a52b-3afe1bf60a7f", 00:08:48.511 "is_configured": true, 00:08:48.512 "data_offset": 2048, 00:08:48.512 "data_size": 63488 00:08:48.512 }, 00:08:48.512 { 00:08:48.512 "name": "BaseBdev3", 00:08:48.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.512 "is_configured": false, 00:08:48.512 "data_offset": 0, 00:08:48.512 "data_size": 0 00:08:48.512 } 00:08:48.512 ] 00:08:48.512 }' 00:08:48.512 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.512 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.079 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:49.079 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.079 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.080 [2024-12-14 12:34:48.655307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:49.080 [2024-12-14 12:34:48.655738] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.080 [2024-12-14 12:34:48.655798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:49.080 [2024-12-14 12:34:48.656116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:49.080 [2024-12-14 12:34:48.656319] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.080 BaseBdev3 00:08:49.080 [2024-12-14 12:34:48.656383] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:49.080 [2024-12-14 12:34:48.656580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.080 [ 00:08:49.080 { 00:08:49.080 "name": "BaseBdev3", 00:08:49.080 "aliases": [ 00:08:49.080 "23d896cb-33ee-4616-bd0e-ce00dd6800e6" 00:08:49.080 ], 00:08:49.080 "product_name": "Malloc disk", 00:08:49.080 "block_size": 512, 00:08:49.080 "num_blocks": 65536, 00:08:49.080 "uuid": "23d896cb-33ee-4616-bd0e-ce00dd6800e6", 00:08:49.080 "assigned_rate_limits": { 00:08:49.080 "rw_ios_per_sec": 0, 00:08:49.080 "rw_mbytes_per_sec": 0, 00:08:49.080 "r_mbytes_per_sec": 0, 00:08:49.080 "w_mbytes_per_sec": 0 00:08:49.080 }, 00:08:49.080 "claimed": true, 00:08:49.080 "claim_type": "exclusive_write", 00:08:49.080 "zoned": false, 00:08:49.080 "supported_io_types": { 00:08:49.080 "read": true, 00:08:49.080 "write": true, 00:08:49.080 "unmap": true, 00:08:49.080 "flush": true, 00:08:49.080 "reset": true, 00:08:49.080 "nvme_admin": false, 00:08:49.080 "nvme_io": false, 00:08:49.080 "nvme_io_md": false, 00:08:49.080 "write_zeroes": true, 00:08:49.080 "zcopy": true, 00:08:49.080 "get_zone_info": false, 00:08:49.080 "zone_management": false, 00:08:49.080 "zone_append": false, 00:08:49.080 "compare": false, 00:08:49.080 "compare_and_write": false, 00:08:49.080 "abort": true, 00:08:49.080 "seek_hole": false, 00:08:49.080 "seek_data": false, 00:08:49.080 "copy": true, 00:08:49.080 "nvme_iov_md": false 00:08:49.080 }, 00:08:49.080 "memory_domains": [ 00:08:49.080 { 00:08:49.080 "dma_device_id": "system", 00:08:49.080 "dma_device_type": 1 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.080 "dma_device_type": 2 00:08:49.080 } 00:08:49.080 ], 00:08:49.080 "driver_specific": 
{} 00:08:49.080 } 00:08:49.080 ] 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.080 "name": "Existed_Raid", 00:08:49.080 "uuid": "21fa2fb8-1e7e-493d-9ff5-d8a7e906fabe", 00:08:49.080 "strip_size_kb": 64, 00:08:49.080 "state": "online", 00:08:49.080 "raid_level": "raid0", 00:08:49.080 "superblock": true, 00:08:49.080 "num_base_bdevs": 3, 00:08:49.080 "num_base_bdevs_discovered": 3, 00:08:49.080 "num_base_bdevs_operational": 3, 00:08:49.080 "base_bdevs_list": [ 00:08:49.080 { 00:08:49.080 "name": "BaseBdev1", 00:08:49.080 "uuid": "27039607-2f20-4fb4-a273-b0ac9778c57e", 00:08:49.080 "is_configured": true, 00:08:49.080 "data_offset": 2048, 00:08:49.080 "data_size": 63488 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "name": "BaseBdev2", 00:08:49.080 "uuid": "eb9650b5-4783-42a2-a52b-3afe1bf60a7f", 00:08:49.080 "is_configured": true, 00:08:49.080 "data_offset": 2048, 00:08:49.080 "data_size": 63488 00:08:49.080 }, 00:08:49.080 { 00:08:49.080 "name": "BaseBdev3", 00:08:49.080 "uuid": "23d896cb-33ee-4616-bd0e-ce00dd6800e6", 00:08:49.080 "is_configured": true, 00:08:49.080 "data_offset": 2048, 00:08:49.080 "data_size": 63488 00:08:49.080 } 00:08:49.080 ] 00:08:49.080 }' 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.080 12:34:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.649 [2024-12-14 12:34:49.194776] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.649 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.649 "name": "Existed_Raid", 00:08:49.649 "aliases": [ 00:08:49.649 "21fa2fb8-1e7e-493d-9ff5-d8a7e906fabe" 00:08:49.649 ], 00:08:49.649 "product_name": "Raid Volume", 00:08:49.649 "block_size": 512, 00:08:49.649 "num_blocks": 190464, 00:08:49.649 "uuid": "21fa2fb8-1e7e-493d-9ff5-d8a7e906fabe", 00:08:49.649 "assigned_rate_limits": { 00:08:49.649 "rw_ios_per_sec": 0, 00:08:49.649 "rw_mbytes_per_sec": 0, 00:08:49.650 "r_mbytes_per_sec": 0, 00:08:49.650 "w_mbytes_per_sec": 0 00:08:49.650 }, 00:08:49.650 "claimed": false, 00:08:49.650 "zoned": false, 00:08:49.650 "supported_io_types": { 00:08:49.650 "read": true, 00:08:49.650 "write": true, 00:08:49.650 "unmap": true, 00:08:49.650 "flush": true, 00:08:49.650 "reset": true, 00:08:49.650 "nvme_admin": false, 00:08:49.650 "nvme_io": false, 00:08:49.650 "nvme_io_md": false, 00:08:49.650 
"write_zeroes": true, 00:08:49.650 "zcopy": false, 00:08:49.650 "get_zone_info": false, 00:08:49.650 "zone_management": false, 00:08:49.650 "zone_append": false, 00:08:49.650 "compare": false, 00:08:49.650 "compare_and_write": false, 00:08:49.650 "abort": false, 00:08:49.650 "seek_hole": false, 00:08:49.650 "seek_data": false, 00:08:49.650 "copy": false, 00:08:49.650 "nvme_iov_md": false 00:08:49.650 }, 00:08:49.650 "memory_domains": [ 00:08:49.650 { 00:08:49.650 "dma_device_id": "system", 00:08:49.650 "dma_device_type": 1 00:08:49.650 }, 00:08:49.650 { 00:08:49.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.650 "dma_device_type": 2 00:08:49.650 }, 00:08:49.650 { 00:08:49.650 "dma_device_id": "system", 00:08:49.650 "dma_device_type": 1 00:08:49.650 }, 00:08:49.650 { 00:08:49.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.650 "dma_device_type": 2 00:08:49.650 }, 00:08:49.650 { 00:08:49.650 "dma_device_id": "system", 00:08:49.650 "dma_device_type": 1 00:08:49.650 }, 00:08:49.650 { 00:08:49.650 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.650 "dma_device_type": 2 00:08:49.650 } 00:08:49.650 ], 00:08:49.650 "driver_specific": { 00:08:49.650 "raid": { 00:08:49.650 "uuid": "21fa2fb8-1e7e-493d-9ff5-d8a7e906fabe", 00:08:49.650 "strip_size_kb": 64, 00:08:49.650 "state": "online", 00:08:49.650 "raid_level": "raid0", 00:08:49.650 "superblock": true, 00:08:49.650 "num_base_bdevs": 3, 00:08:49.650 "num_base_bdevs_discovered": 3, 00:08:49.650 "num_base_bdevs_operational": 3, 00:08:49.650 "base_bdevs_list": [ 00:08:49.650 { 00:08:49.650 "name": "BaseBdev1", 00:08:49.650 "uuid": "27039607-2f20-4fb4-a273-b0ac9778c57e", 00:08:49.650 "is_configured": true, 00:08:49.650 "data_offset": 2048, 00:08:49.650 "data_size": 63488 00:08:49.650 }, 00:08:49.650 { 00:08:49.650 "name": "BaseBdev2", 00:08:49.650 "uuid": "eb9650b5-4783-42a2-a52b-3afe1bf60a7f", 00:08:49.650 "is_configured": true, 00:08:49.650 "data_offset": 2048, 00:08:49.650 "data_size": 63488 00:08:49.650 }, 
00:08:49.650 { 00:08:49.650 "name": "BaseBdev3", 00:08:49.650 "uuid": "23d896cb-33ee-4616-bd0e-ce00dd6800e6", 00:08:49.650 "is_configured": true, 00:08:49.650 "data_offset": 2048, 00:08:49.650 "data_size": 63488 00:08:49.650 } 00:08:49.650 ] 00:08:49.650 } 00:08:49.650 } 00:08:49.650 }' 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.650 BaseBdev2 00:08:49.650 BaseBdev3' 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.650 
12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.650 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.909 [2024-12-14 12:34:49.438145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.909 [2024-12-14 12:34:49.438173] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.909 [2024-12-14 12:34:49.438224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.909 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.910 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.910 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.910 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.910 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.910 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.910 "name": "Existed_Raid", 00:08:49.910 "uuid": "21fa2fb8-1e7e-493d-9ff5-d8a7e906fabe", 00:08:49.910 "strip_size_kb": 64, 00:08:49.910 "state": "offline", 00:08:49.910 "raid_level": "raid0", 00:08:49.910 "superblock": true, 00:08:49.910 "num_base_bdevs": 3, 00:08:49.910 "num_base_bdevs_discovered": 2, 00:08:49.910 "num_base_bdevs_operational": 2, 00:08:49.910 "base_bdevs_list": [ 00:08:49.910 { 00:08:49.910 "name": null, 00:08:49.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.910 "is_configured": false, 00:08:49.910 "data_offset": 0, 00:08:49.910 "data_size": 63488 00:08:49.910 }, 00:08:49.910 { 00:08:49.910 "name": "BaseBdev2", 00:08:49.910 "uuid": "eb9650b5-4783-42a2-a52b-3afe1bf60a7f", 00:08:49.910 "is_configured": true, 00:08:49.910 "data_offset": 2048, 00:08:49.910 "data_size": 63488 00:08:49.910 }, 00:08:49.910 { 00:08:49.910 "name": "BaseBdev3", 00:08:49.910 "uuid": "23d896cb-33ee-4616-bd0e-ce00dd6800e6", 
00:08:49.910 "is_configured": true, 00:08:49.910 "data_offset": 2048, 00:08:49.910 "data_size": 63488 00:08:49.910 } 00:08:49.910 ] 00:08:49.910 }' 00:08:49.910 12:34:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.910 12:34:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 [2024-12-14 12:34:50.053140] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.479 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.479 [2024-12-14 12:34:50.208077] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.479 [2024-12-14 12:34:50.208127] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.739 BaseBdev2 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.739 [ 00:08:50.739 { 00:08:50.739 "name": "BaseBdev2", 00:08:50.739 "aliases": [ 00:08:50.739 "4529d7f9-742a-4db7-ab6e-7bc651680134" 00:08:50.739 ], 00:08:50.739 "product_name": "Malloc disk", 00:08:50.739 "block_size": 512, 00:08:50.739 "num_blocks": 65536, 00:08:50.739 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:50.739 "assigned_rate_limits": { 00:08:50.739 "rw_ios_per_sec": 0, 00:08:50.739 "rw_mbytes_per_sec": 0, 00:08:50.739 "r_mbytes_per_sec": 0, 00:08:50.739 "w_mbytes_per_sec": 0 00:08:50.739 }, 00:08:50.739 "claimed": false, 00:08:50.739 "zoned": false, 00:08:50.739 "supported_io_types": { 00:08:50.739 "read": true, 00:08:50.739 "write": true, 00:08:50.739 "unmap": true, 00:08:50.739 "flush": true, 00:08:50.739 "reset": true, 00:08:50.739 "nvme_admin": false, 00:08:50.739 "nvme_io": false, 00:08:50.739 "nvme_io_md": false, 00:08:50.739 "write_zeroes": true, 00:08:50.739 "zcopy": true, 00:08:50.739 "get_zone_info": false, 00:08:50.739 "zone_management": false, 00:08:50.739 
"zone_append": false, 00:08:50.739 "compare": false, 00:08:50.739 "compare_and_write": false, 00:08:50.739 "abort": true, 00:08:50.739 "seek_hole": false, 00:08:50.739 "seek_data": false, 00:08:50.739 "copy": true, 00:08:50.739 "nvme_iov_md": false 00:08:50.739 }, 00:08:50.739 "memory_domains": [ 00:08:50.739 { 00:08:50.739 "dma_device_id": "system", 00:08:50.739 "dma_device_type": 1 00:08:50.739 }, 00:08:50.739 { 00:08:50.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.739 "dma_device_type": 2 00:08:50.739 } 00:08:50.739 ], 00:08:50.739 "driver_specific": {} 00:08:50.739 } 00:08:50.739 ] 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.739 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.999 BaseBdev3 00:08:50.999 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.999 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:50.999 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:50.999 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.999 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:50.999 
12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.000 [ 00:08:51.000 { 00:08:51.000 "name": "BaseBdev3", 00:08:51.000 "aliases": [ 00:08:51.000 "87ea47bc-7dbc-41af-9a0f-be826844a0d6" 00:08:51.000 ], 00:08:51.000 "product_name": "Malloc disk", 00:08:51.000 "block_size": 512, 00:08:51.000 "num_blocks": 65536, 00:08:51.000 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:51.000 "assigned_rate_limits": { 00:08:51.000 "rw_ios_per_sec": 0, 00:08:51.000 "rw_mbytes_per_sec": 0, 00:08:51.000 "r_mbytes_per_sec": 0, 00:08:51.000 "w_mbytes_per_sec": 0 00:08:51.000 }, 00:08:51.000 "claimed": false, 00:08:51.000 "zoned": false, 00:08:51.000 "supported_io_types": { 00:08:51.000 "read": true, 00:08:51.000 "write": true, 00:08:51.000 "unmap": true, 00:08:51.000 "flush": true, 00:08:51.000 "reset": true, 00:08:51.000 "nvme_admin": false, 00:08:51.000 "nvme_io": false, 00:08:51.000 "nvme_io_md": false, 00:08:51.000 "write_zeroes": true, 00:08:51.000 "zcopy": true, 00:08:51.000 "get_zone_info": false, 
00:08:51.000 "zone_management": false, 00:08:51.000 "zone_append": false, 00:08:51.000 "compare": false, 00:08:51.000 "compare_and_write": false, 00:08:51.000 "abort": true, 00:08:51.000 "seek_hole": false, 00:08:51.000 "seek_data": false, 00:08:51.000 "copy": true, 00:08:51.000 "nvme_iov_md": false 00:08:51.000 }, 00:08:51.000 "memory_domains": [ 00:08:51.000 { 00:08:51.000 "dma_device_id": "system", 00:08:51.000 "dma_device_type": 1 00:08:51.000 }, 00:08:51.000 { 00:08:51.000 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.000 "dma_device_type": 2 00:08:51.000 } 00:08:51.000 ], 00:08:51.000 "driver_specific": {} 00:08:51.000 } 00:08:51.000 ] 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.000 [2024-12-14 12:34:50.522472] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:51.000 [2024-12-14 12:34:50.522558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:51.000 [2024-12-14 12:34:50.522601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.000 [2024-12-14 12:34:50.524365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:51.000 "name": "Existed_Raid", 00:08:51.000 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:51.000 "strip_size_kb": 64, 00:08:51.000 "state": "configuring", 00:08:51.000 "raid_level": "raid0", 00:08:51.000 "superblock": true, 00:08:51.000 "num_base_bdevs": 3, 00:08:51.000 "num_base_bdevs_discovered": 2, 00:08:51.000 "num_base_bdevs_operational": 3, 00:08:51.000 "base_bdevs_list": [ 00:08:51.000 { 00:08:51.000 "name": "BaseBdev1", 00:08:51.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.000 "is_configured": false, 00:08:51.000 "data_offset": 0, 00:08:51.000 "data_size": 0 00:08:51.000 }, 00:08:51.000 { 00:08:51.000 "name": "BaseBdev2", 00:08:51.000 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:51.000 "is_configured": true, 00:08:51.000 "data_offset": 2048, 00:08:51.000 "data_size": 63488 00:08:51.000 }, 00:08:51.000 { 00:08:51.000 "name": "BaseBdev3", 00:08:51.000 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:51.000 "is_configured": true, 00:08:51.000 "data_offset": 2048, 00:08:51.000 "data_size": 63488 00:08:51.000 } 00:08:51.000 ] 00:08:51.000 }' 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.000 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.260 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:51.260 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.260 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.260 [2024-12-14 12:34:50.993751] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.520 12:34:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.520 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.520 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.520 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.520 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.520 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.520 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.520 "name": "Existed_Raid", 00:08:51.520 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:51.520 "strip_size_kb": 64, 00:08:51.520 "state": "configuring", 00:08:51.520 "raid_level": "raid0", 
00:08:51.520 "superblock": true, 00:08:51.520 "num_base_bdevs": 3, 00:08:51.520 "num_base_bdevs_discovered": 1, 00:08:51.520 "num_base_bdevs_operational": 3, 00:08:51.520 "base_bdevs_list": [ 00:08:51.520 { 00:08:51.520 "name": "BaseBdev1", 00:08:51.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.520 "is_configured": false, 00:08:51.520 "data_offset": 0, 00:08:51.520 "data_size": 0 00:08:51.520 }, 00:08:51.520 { 00:08:51.520 "name": null, 00:08:51.520 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:51.520 "is_configured": false, 00:08:51.520 "data_offset": 0, 00:08:51.520 "data_size": 63488 00:08:51.520 }, 00:08:51.520 { 00:08:51.520 "name": "BaseBdev3", 00:08:51.520 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:51.520 "is_configured": true, 00:08:51.520 "data_offset": 2048, 00:08:51.520 "data_size": 63488 00:08:51.520 } 00:08:51.520 ] 00:08:51.520 }' 00:08:51.520 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.520 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.780 [2024-12-14 12:34:51.509774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.780 BaseBdev1 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.780 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.039 [ 00:08:52.039 { 00:08:52.039 "name": "BaseBdev1", 00:08:52.039 
"aliases": [ 00:08:52.039 "a7055085-816e-488c-b812-938e0bdcb62e" 00:08:52.039 ], 00:08:52.039 "product_name": "Malloc disk", 00:08:52.039 "block_size": 512, 00:08:52.039 "num_blocks": 65536, 00:08:52.039 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:52.039 "assigned_rate_limits": { 00:08:52.039 "rw_ios_per_sec": 0, 00:08:52.039 "rw_mbytes_per_sec": 0, 00:08:52.039 "r_mbytes_per_sec": 0, 00:08:52.039 "w_mbytes_per_sec": 0 00:08:52.039 }, 00:08:52.039 "claimed": true, 00:08:52.039 "claim_type": "exclusive_write", 00:08:52.039 "zoned": false, 00:08:52.039 "supported_io_types": { 00:08:52.039 "read": true, 00:08:52.039 "write": true, 00:08:52.039 "unmap": true, 00:08:52.039 "flush": true, 00:08:52.039 "reset": true, 00:08:52.039 "nvme_admin": false, 00:08:52.039 "nvme_io": false, 00:08:52.039 "nvme_io_md": false, 00:08:52.039 "write_zeroes": true, 00:08:52.039 "zcopy": true, 00:08:52.039 "get_zone_info": false, 00:08:52.039 "zone_management": false, 00:08:52.039 "zone_append": false, 00:08:52.039 "compare": false, 00:08:52.039 "compare_and_write": false, 00:08:52.039 "abort": true, 00:08:52.039 "seek_hole": false, 00:08:52.039 "seek_data": false, 00:08:52.039 "copy": true, 00:08:52.039 "nvme_iov_md": false 00:08:52.039 }, 00:08:52.039 "memory_domains": [ 00:08:52.039 { 00:08:52.039 "dma_device_id": "system", 00:08:52.039 "dma_device_type": 1 00:08:52.039 }, 00:08:52.039 { 00:08:52.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.039 "dma_device_type": 2 00:08:52.039 } 00:08:52.039 ], 00:08:52.039 "driver_specific": {} 00:08:52.039 } 00:08:52.039 ] 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.039 12:34:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.039 "name": "Existed_Raid", 00:08:52.039 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:52.039 "strip_size_kb": 64, 00:08:52.039 "state": "configuring", 00:08:52.039 "raid_level": "raid0", 00:08:52.039 "superblock": true, 00:08:52.039 "num_base_bdevs": 3, 00:08:52.039 
"num_base_bdevs_discovered": 2, 00:08:52.039 "num_base_bdevs_operational": 3, 00:08:52.039 "base_bdevs_list": [ 00:08:52.039 { 00:08:52.039 "name": "BaseBdev1", 00:08:52.039 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:52.039 "is_configured": true, 00:08:52.039 "data_offset": 2048, 00:08:52.039 "data_size": 63488 00:08:52.039 }, 00:08:52.039 { 00:08:52.039 "name": null, 00:08:52.039 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:52.039 "is_configured": false, 00:08:52.039 "data_offset": 0, 00:08:52.039 "data_size": 63488 00:08:52.039 }, 00:08:52.039 { 00:08:52.039 "name": "BaseBdev3", 00:08:52.039 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:52.039 "is_configured": true, 00:08:52.039 "data_offset": 2048, 00:08:52.039 "data_size": 63488 00:08:52.039 } 00:08:52.039 ] 00:08:52.039 }' 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.039 12:34:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.609 12:34:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.609 [2024-12-14 12:34:52.084872] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.609 12:34:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.609 "name": "Existed_Raid", 00:08:52.609 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:52.609 "strip_size_kb": 64, 00:08:52.609 "state": "configuring", 00:08:52.609 "raid_level": "raid0", 00:08:52.609 "superblock": true, 00:08:52.609 "num_base_bdevs": 3, 00:08:52.609 "num_base_bdevs_discovered": 1, 00:08:52.609 "num_base_bdevs_operational": 3, 00:08:52.609 "base_bdevs_list": [ 00:08:52.609 { 00:08:52.609 "name": "BaseBdev1", 00:08:52.609 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:52.609 "is_configured": true, 00:08:52.609 "data_offset": 2048, 00:08:52.609 "data_size": 63488 00:08:52.609 }, 00:08:52.609 { 00:08:52.609 "name": null, 00:08:52.609 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:52.609 "is_configured": false, 00:08:52.609 "data_offset": 0, 00:08:52.609 "data_size": 63488 00:08:52.609 }, 00:08:52.609 { 00:08:52.609 "name": null, 00:08:52.609 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:52.609 "is_configured": false, 00:08:52.609 "data_offset": 0, 00:08:52.609 "data_size": 63488 00:08:52.609 } 00:08:52.609 ] 00:08:52.609 }' 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.609 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.869 12:34:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.869 [2024-12-14 12:34:52.584065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.869 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.129 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.129 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.129 "name": "Existed_Raid", 00:08:53.129 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:53.129 "strip_size_kb": 64, 00:08:53.129 "state": "configuring", 00:08:53.129 "raid_level": "raid0", 00:08:53.129 "superblock": true, 00:08:53.129 "num_base_bdevs": 3, 00:08:53.129 "num_base_bdevs_discovered": 2, 00:08:53.129 "num_base_bdevs_operational": 3, 00:08:53.129 "base_bdevs_list": [ 00:08:53.129 { 00:08:53.129 "name": "BaseBdev1", 00:08:53.129 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:53.129 "is_configured": true, 00:08:53.129 "data_offset": 2048, 00:08:53.129 "data_size": 63488 00:08:53.129 }, 00:08:53.129 { 00:08:53.129 "name": null, 00:08:53.129 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:53.129 "is_configured": false, 00:08:53.129 "data_offset": 0, 00:08:53.129 "data_size": 63488 00:08:53.129 }, 00:08:53.129 { 00:08:53.129 "name": "BaseBdev3", 00:08:53.129 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:53.129 "is_configured": true, 00:08:53.129 "data_offset": 2048, 00:08:53.129 "data_size": 63488 00:08:53.129 } 00:08:53.129 ] 00:08:53.129 }' 00:08:53.129 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.129 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:53.389 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.389 12:34:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:53.389 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.389 12:34:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.389 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.389 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:53.389 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:53.389 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.389 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.389 [2024-12-14 12:34:53.039238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.648 "name": "Existed_Raid", 00:08:53.648 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:53.648 "strip_size_kb": 64, 00:08:53.648 "state": "configuring", 00:08:53.648 "raid_level": "raid0", 00:08:53.648 "superblock": true, 00:08:53.648 "num_base_bdevs": 3, 00:08:53.648 "num_base_bdevs_discovered": 1, 00:08:53.648 "num_base_bdevs_operational": 3, 00:08:53.648 "base_bdevs_list": [ 00:08:53.648 { 00:08:53.648 "name": null, 00:08:53.648 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:53.648 "is_configured": false, 00:08:53.648 "data_offset": 0, 00:08:53.648 "data_size": 63488 00:08:53.648 }, 00:08:53.648 { 00:08:53.648 "name": null, 00:08:53.648 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:53.648 "is_configured": false, 00:08:53.648 "data_offset": 0, 00:08:53.648 "data_size": 63488 00:08:53.648 
}, 00:08:53.648 { 00:08:53.648 "name": "BaseBdev3", 00:08:53.648 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:53.648 "is_configured": true, 00:08:53.648 "data_offset": 2048, 00:08:53.648 "data_size": 63488 00:08:53.648 } 00:08:53.648 ] 00:08:53.648 }' 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.648 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:53.908 [2024-12-14 12:34:53.626558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.908 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.167 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.167 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.167 "name": "Existed_Raid", 00:08:54.167 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:54.167 "strip_size_kb": 64, 00:08:54.167 "state": "configuring", 00:08:54.167 "raid_level": "raid0", 00:08:54.167 "superblock": true, 00:08:54.167 "num_base_bdevs": 3, 00:08:54.167 "num_base_bdevs_discovered": 2, 00:08:54.167 
"num_base_bdevs_operational": 3, 00:08:54.167 "base_bdevs_list": [ 00:08:54.167 { 00:08:54.167 "name": null, 00:08:54.167 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:54.167 "is_configured": false, 00:08:54.167 "data_offset": 0, 00:08:54.167 "data_size": 63488 00:08:54.167 }, 00:08:54.167 { 00:08:54.167 "name": "BaseBdev2", 00:08:54.167 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:54.167 "is_configured": true, 00:08:54.167 "data_offset": 2048, 00:08:54.167 "data_size": 63488 00:08:54.167 }, 00:08:54.167 { 00:08:54.167 "name": "BaseBdev3", 00:08:54.167 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:54.167 "is_configured": true, 00:08:54.167 "data_offset": 2048, 00:08:54.167 "data_size": 63488 00:08:54.167 } 00:08:54.167 ] 00:08:54.167 }' 00:08:54.167 12:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.167 12:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:54.426 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a7055085-816e-488c-b812-938e0bdcb62e 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.686 [2024-12-14 12:34:54.203036] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:54.686 [2024-12-14 12:34:54.203279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:54.686 [2024-12-14 12:34:54.203296] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:54.686 [2024-12-14 12:34:54.203592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:54.686 [2024-12-14 12:34:54.203755] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:54.686 [2024-12-14 12:34:54.203767] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:54.686 NewBaseBdev 00:08:54.686 [2024-12-14 12:34:54.203904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:54.686 12:34:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.686 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.686 [ 00:08:54.686 { 00:08:54.686 "name": "NewBaseBdev", 00:08:54.686 "aliases": [ 00:08:54.686 "a7055085-816e-488c-b812-938e0bdcb62e" 00:08:54.686 ], 00:08:54.686 "product_name": "Malloc disk", 00:08:54.686 "block_size": 512, 00:08:54.686 "num_blocks": 65536, 00:08:54.686 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:54.686 "assigned_rate_limits": { 00:08:54.686 "rw_ios_per_sec": 0, 00:08:54.686 "rw_mbytes_per_sec": 0, 00:08:54.686 "r_mbytes_per_sec": 0, 00:08:54.686 "w_mbytes_per_sec": 0 00:08:54.686 }, 00:08:54.686 "claimed": true, 00:08:54.686 "claim_type": "exclusive_write", 00:08:54.686 "zoned": false, 00:08:54.686 "supported_io_types": { 00:08:54.686 "read": true, 00:08:54.686 "write": true, 00:08:54.686 "unmap": true, 
00:08:54.686 "flush": true, 00:08:54.686 "reset": true, 00:08:54.686 "nvme_admin": false, 00:08:54.686 "nvme_io": false, 00:08:54.686 "nvme_io_md": false, 00:08:54.687 "write_zeroes": true, 00:08:54.687 "zcopy": true, 00:08:54.687 "get_zone_info": false, 00:08:54.687 "zone_management": false, 00:08:54.687 "zone_append": false, 00:08:54.687 "compare": false, 00:08:54.687 "compare_and_write": false, 00:08:54.687 "abort": true, 00:08:54.687 "seek_hole": false, 00:08:54.687 "seek_data": false, 00:08:54.687 "copy": true, 00:08:54.687 "nvme_iov_md": false 00:08:54.687 }, 00:08:54.687 "memory_domains": [ 00:08:54.687 { 00:08:54.687 "dma_device_id": "system", 00:08:54.687 "dma_device_type": 1 00:08:54.687 }, 00:08:54.687 { 00:08:54.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.687 "dma_device_type": 2 00:08:54.687 } 00:08:54.687 ], 00:08:54.687 "driver_specific": {} 00:08:54.687 } 00:08:54.687 ] 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.687 12:34:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.687 "name": "Existed_Raid", 00:08:54.687 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:54.687 "strip_size_kb": 64, 00:08:54.687 "state": "online", 00:08:54.687 "raid_level": "raid0", 00:08:54.687 "superblock": true, 00:08:54.687 "num_base_bdevs": 3, 00:08:54.687 "num_base_bdevs_discovered": 3, 00:08:54.687 "num_base_bdevs_operational": 3, 00:08:54.687 "base_bdevs_list": [ 00:08:54.687 { 00:08:54.687 "name": "NewBaseBdev", 00:08:54.687 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:54.687 "is_configured": true, 00:08:54.687 "data_offset": 2048, 00:08:54.687 "data_size": 63488 00:08:54.687 }, 00:08:54.687 { 00:08:54.687 "name": "BaseBdev2", 00:08:54.687 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:54.687 "is_configured": true, 00:08:54.687 "data_offset": 2048, 00:08:54.687 "data_size": 63488 00:08:54.687 }, 00:08:54.687 { 00:08:54.687 "name": "BaseBdev3", 00:08:54.687 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:54.687 "is_configured": 
true, 00:08:54.687 "data_offset": 2048, 00:08:54.687 "data_size": 63488 00:08:54.687 } 00:08:54.687 ] 00:08:54.687 }' 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.687 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.255 [2024-12-14 12:34:54.718578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:55.255 "name": "Existed_Raid", 00:08:55.255 "aliases": [ 00:08:55.255 "1edbac65-af6d-40fd-8123-cd676188d973" 00:08:55.255 ], 00:08:55.255 "product_name": "Raid Volume", 
00:08:55.255 "block_size": 512, 00:08:55.255 "num_blocks": 190464, 00:08:55.255 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:55.255 "assigned_rate_limits": { 00:08:55.255 "rw_ios_per_sec": 0, 00:08:55.255 "rw_mbytes_per_sec": 0, 00:08:55.255 "r_mbytes_per_sec": 0, 00:08:55.255 "w_mbytes_per_sec": 0 00:08:55.255 }, 00:08:55.255 "claimed": false, 00:08:55.255 "zoned": false, 00:08:55.255 "supported_io_types": { 00:08:55.255 "read": true, 00:08:55.255 "write": true, 00:08:55.255 "unmap": true, 00:08:55.255 "flush": true, 00:08:55.255 "reset": true, 00:08:55.255 "nvme_admin": false, 00:08:55.255 "nvme_io": false, 00:08:55.255 "nvme_io_md": false, 00:08:55.255 "write_zeroes": true, 00:08:55.255 "zcopy": false, 00:08:55.255 "get_zone_info": false, 00:08:55.255 "zone_management": false, 00:08:55.255 "zone_append": false, 00:08:55.255 "compare": false, 00:08:55.255 "compare_and_write": false, 00:08:55.255 "abort": false, 00:08:55.255 "seek_hole": false, 00:08:55.255 "seek_data": false, 00:08:55.255 "copy": false, 00:08:55.255 "nvme_iov_md": false 00:08:55.255 }, 00:08:55.255 "memory_domains": [ 00:08:55.255 { 00:08:55.255 "dma_device_id": "system", 00:08:55.255 "dma_device_type": 1 00:08:55.255 }, 00:08:55.255 { 00:08:55.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.255 "dma_device_type": 2 00:08:55.255 }, 00:08:55.255 { 00:08:55.255 "dma_device_id": "system", 00:08:55.255 "dma_device_type": 1 00:08:55.255 }, 00:08:55.255 { 00:08:55.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.255 "dma_device_type": 2 00:08:55.255 }, 00:08:55.255 { 00:08:55.255 "dma_device_id": "system", 00:08:55.255 "dma_device_type": 1 00:08:55.255 }, 00:08:55.255 { 00:08:55.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.255 "dma_device_type": 2 00:08:55.255 } 00:08:55.255 ], 00:08:55.255 "driver_specific": { 00:08:55.255 "raid": { 00:08:55.255 "uuid": "1edbac65-af6d-40fd-8123-cd676188d973", 00:08:55.255 "strip_size_kb": 64, 00:08:55.255 "state": "online", 00:08:55.255 
"raid_level": "raid0", 00:08:55.255 "superblock": true, 00:08:55.255 "num_base_bdevs": 3, 00:08:55.255 "num_base_bdevs_discovered": 3, 00:08:55.255 "num_base_bdevs_operational": 3, 00:08:55.255 "base_bdevs_list": [ 00:08:55.255 { 00:08:55.255 "name": "NewBaseBdev", 00:08:55.255 "uuid": "a7055085-816e-488c-b812-938e0bdcb62e", 00:08:55.255 "is_configured": true, 00:08:55.255 "data_offset": 2048, 00:08:55.255 "data_size": 63488 00:08:55.255 }, 00:08:55.255 { 00:08:55.255 "name": "BaseBdev2", 00:08:55.255 "uuid": "4529d7f9-742a-4db7-ab6e-7bc651680134", 00:08:55.255 "is_configured": true, 00:08:55.255 "data_offset": 2048, 00:08:55.255 "data_size": 63488 00:08:55.255 }, 00:08:55.255 { 00:08:55.255 "name": "BaseBdev3", 00:08:55.255 "uuid": "87ea47bc-7dbc-41af-9a0f-be826844a0d6", 00:08:55.255 "is_configured": true, 00:08:55.255 "data_offset": 2048, 00:08:55.255 "data_size": 63488 00:08:55.255 } 00:08:55.255 ] 00:08:55.255 } 00:08:55.255 } 00:08:55.255 }' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:55.255 BaseBdev2 00:08:55.255 BaseBdev3' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 
00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.255 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:55.255 [2024-12-14 12:34:54.969810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:55.255 [2024-12-14 12:34:54.969881] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.256 [2024-12-14 12:34:54.969987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.256 [2024-12-14 12:34:54.970043] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.256 [2024-12-14 12:34:54.970071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:55.256 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.256 12:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66240 00:08:55.256 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66240 ']' 00:08:55.256 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66240 00:08:55.256 12:34:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:55.256 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.256 12:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66240 00:08:55.522 killing process with pid 66240 00:08:55.522 12:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.522 12:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.522 12:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66240' 00:08:55.522 12:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66240 00:08:55.522 [2024-12-14 12:34:55.018763] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.522 12:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66240 00:08:55.783 [2024-12-14 12:34:55.322526] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.722 ************************************ 00:08:56.722 END TEST raid_state_function_test_sb 00:08:56.722 ************************************ 00:08:56.722 12:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:56.722 00:08:56.722 real 0m10.684s 00:08:56.722 user 0m17.083s 00:08:56.722 sys 0m1.835s 00:08:56.722 12:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.722 12:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.982 12:34:56 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:56.982 12:34:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:56.982 12:34:56 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.982 12:34:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.982 ************************************ 00:08:56.982 START TEST raid_superblock_test 00:08:56.982 ************************************ 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:56.982 12:34:56 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66866 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66866 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66866 ']' 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:56.982 12:34:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.982 [2024-12-14 12:34:56.593103] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:08:56.982 [2024-12-14 12:34:56.593222] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66866 ] 00:08:57.242 [2024-12-14 12:34:56.768564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.242 [2024-12-14 12:34:56.887990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.502 [2024-12-14 12:34:57.092877] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.502 [2024-12-14 12:34:57.093022] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:57.762 
12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.762 malloc1 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.762 [2024-12-14 12:34:57.463491] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:57.762 [2024-12-14 12:34:57.463558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.762 [2024-12-14 12:34:57.463595] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:57.762 [2024-12-14 12:34:57.463604] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.762 [2024-12-14 12:34:57.465692] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.762 [2024-12-14 12:34:57.465729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:57.762 pt1 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.762 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.022 malloc2 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.022 [2024-12-14 12:34:57.516971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:58.022 [2024-12-14 12:34:57.517093] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.022 [2024-12-14 12:34:57.517154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:58.022 [2024-12-14 12:34:57.517190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.022 [2024-12-14 12:34:57.519371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.022 [2024-12-14 12:34:57.519438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:58.022 
pt2 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.022 malloc3 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.022 [2024-12-14 12:34:57.589471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:58.022 [2024-12-14 12:34:57.589565] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.022 [2024-12-14 12:34:57.589618] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:58.022 [2024-12-14 12:34:57.589651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.022 [2024-12-14 12:34:57.591777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.022 [2024-12-14 12:34:57.591843] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:58.022 pt3 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.022 [2024-12-14 12:34:57.601496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:58.022 [2024-12-14 12:34:57.603310] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.022 [2024-12-14 12:34:57.603377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:58.022 [2024-12-14 12:34:57.603528] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:58.022 [2024-12-14 12:34:57.603543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:58.022 [2024-12-14 12:34:57.603798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:58.022 [2024-12-14 12:34:57.603959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:58.022 [2024-12-14 12:34:57.603969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:58.022 [2024-12-14 12:34:57.604134] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.022 12:34:57 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.022 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.022 "name": "raid_bdev1", 00:08:58.022 "uuid": "80c8a42c-f421-4316-8dd4-23131164ad9d", 00:08:58.022 "strip_size_kb": 64, 00:08:58.022 "state": "online", 00:08:58.022 "raid_level": "raid0", 00:08:58.022 "superblock": true, 00:08:58.022 "num_base_bdevs": 3, 00:08:58.022 "num_base_bdevs_discovered": 3, 00:08:58.022 "num_base_bdevs_operational": 3, 00:08:58.022 "base_bdevs_list": [ 00:08:58.022 { 00:08:58.022 "name": "pt1", 00:08:58.022 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.022 "is_configured": true, 00:08:58.022 "data_offset": 2048, 00:08:58.022 "data_size": 63488 00:08:58.022 }, 00:08:58.022 { 00:08:58.022 "name": "pt2", 00:08:58.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.022 "is_configured": true, 00:08:58.022 "data_offset": 2048, 00:08:58.022 "data_size": 63488 00:08:58.022 }, 00:08:58.022 { 00:08:58.022 "name": "pt3", 00:08:58.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.023 "is_configured": true, 00:08:58.023 "data_offset": 2048, 00:08:58.023 "data_size": 63488 00:08:58.023 } 00:08:58.023 ] 00:08:58.023 }' 00:08:58.023 12:34:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.023 12:34:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.591 [2024-12-14 12:34:58.053034] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:58.591 "name": "raid_bdev1", 00:08:58.591 "aliases": [ 00:08:58.591 "80c8a42c-f421-4316-8dd4-23131164ad9d" 00:08:58.591 ], 00:08:58.591 "product_name": "Raid Volume", 00:08:58.591 "block_size": 512, 00:08:58.591 "num_blocks": 190464, 00:08:58.591 "uuid": "80c8a42c-f421-4316-8dd4-23131164ad9d", 00:08:58.591 "assigned_rate_limits": { 00:08:58.591 "rw_ios_per_sec": 0, 00:08:58.591 "rw_mbytes_per_sec": 0, 00:08:58.591 "r_mbytes_per_sec": 0, 00:08:58.591 "w_mbytes_per_sec": 0 00:08:58.591 }, 00:08:58.591 "claimed": false, 00:08:58.591 "zoned": false, 00:08:58.591 "supported_io_types": { 00:08:58.591 "read": true, 00:08:58.591 "write": true, 00:08:58.591 "unmap": true, 00:08:58.591 "flush": true, 00:08:58.591 "reset": true, 00:08:58.591 "nvme_admin": false, 00:08:58.591 "nvme_io": false, 00:08:58.591 "nvme_io_md": false, 00:08:58.591 "write_zeroes": true, 00:08:58.591 "zcopy": false, 00:08:58.591 "get_zone_info": false, 00:08:58.591 "zone_management": false, 00:08:58.591 "zone_append": false, 00:08:58.591 "compare": 
false, 00:08:58.591 "compare_and_write": false, 00:08:58.591 "abort": false, 00:08:58.591 "seek_hole": false, 00:08:58.591 "seek_data": false, 00:08:58.591 "copy": false, 00:08:58.591 "nvme_iov_md": false 00:08:58.591 }, 00:08:58.591 "memory_domains": [ 00:08:58.591 { 00:08:58.591 "dma_device_id": "system", 00:08:58.591 "dma_device_type": 1 00:08:58.591 }, 00:08:58.591 { 00:08:58.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.591 "dma_device_type": 2 00:08:58.591 }, 00:08:58.591 { 00:08:58.591 "dma_device_id": "system", 00:08:58.591 "dma_device_type": 1 00:08:58.591 }, 00:08:58.591 { 00:08:58.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.591 "dma_device_type": 2 00:08:58.591 }, 00:08:58.591 { 00:08:58.591 "dma_device_id": "system", 00:08:58.591 "dma_device_type": 1 00:08:58.591 }, 00:08:58.591 { 00:08:58.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.591 "dma_device_type": 2 00:08:58.591 } 00:08:58.591 ], 00:08:58.591 "driver_specific": { 00:08:58.591 "raid": { 00:08:58.591 "uuid": "80c8a42c-f421-4316-8dd4-23131164ad9d", 00:08:58.591 "strip_size_kb": 64, 00:08:58.591 "state": "online", 00:08:58.591 "raid_level": "raid0", 00:08:58.591 "superblock": true, 00:08:58.591 "num_base_bdevs": 3, 00:08:58.591 "num_base_bdevs_discovered": 3, 00:08:58.591 "num_base_bdevs_operational": 3, 00:08:58.591 "base_bdevs_list": [ 00:08:58.591 { 00:08:58.591 "name": "pt1", 00:08:58.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:58.591 "is_configured": true, 00:08:58.591 "data_offset": 2048, 00:08:58.591 "data_size": 63488 00:08:58.591 }, 00:08:58.591 { 00:08:58.591 "name": "pt2", 00:08:58.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.591 "is_configured": true, 00:08:58.591 "data_offset": 2048, 00:08:58.591 "data_size": 63488 00:08:58.591 }, 00:08:58.591 { 00:08:58.591 "name": "pt3", 00:08:58.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:58.591 "is_configured": true, 00:08:58.591 "data_offset": 2048, 00:08:58.591 "data_size": 
63488 00:08:58.591 } 00:08:58.591 ] 00:08:58.591 } 00:08:58.591 } 00:08:58.591 }' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:58.591 pt2 00:08:58.591 pt3' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.591 12:34:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.592 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.852 [2024-12-14 12:34:58.356468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=80c8a42c-f421-4316-8dd4-23131164ad9d 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 80c8a42c-f421-4316-8dd4-23131164ad9d ']' 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.852 [2024-12-14 12:34:58.400146] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.852 [2024-12-14 12:34:58.400216] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.852 [2024-12-14 12:34:58.400321] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.852 [2024-12-14 12:34:58.400402] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.852 [2024-12-14 12:34:58.400485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:58.852 12:34:58 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:58.852 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.853 [2024-12-14 12:34:58.539936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:58.853 [2024-12-14 12:34:58.541806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:58.853 [2024-12-14 12:34:58.541899] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:58.853 [2024-12-14 12:34:58.541969] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:58.853 [2024-12-14 12:34:58.542082] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:58.853 [2024-12-14 12:34:58.542175] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:58.853 [2024-12-14 12:34:58.542235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.853 [2024-12-14 12:34:58.542283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:58.853 request: 00:08:58.853 { 00:08:58.853 "name": "raid_bdev1", 00:08:58.853 "raid_level": "raid0", 00:08:58.853 "base_bdevs": [ 00:08:58.853 "malloc1", 00:08:58.853 "malloc2", 00:08:58.853 "malloc3" 00:08:58.853 ], 00:08:58.853 "strip_size_kb": 64, 00:08:58.853 "superblock": false, 00:08:58.853 "method": "bdev_raid_create", 00:08:58.853 "req_id": 1 00:08:58.853 } 00:08:58.853 Got JSON-RPC error response 00:08:58.853 response: 00:08:58.853 { 00:08:58.853 "code": -17, 00:08:58.853 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:58.853 } 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:58.853 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.112 [2024-12-14 12:34:58.603789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:59.112 [2024-12-14 12:34:58.603848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.112 [2024-12-14 12:34:58.603868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:59.112 [2024-12-14 12:34:58.603876] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.112 [2024-12-14 12:34:58.606225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.112 [2024-12-14 12:34:58.606304] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:59.112 [2024-12-14 12:34:58.606400] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:59.112 [2024-12-14 12:34:58.606466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:59.112 pt1 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.112 "name": "raid_bdev1", 00:08:59.112 "uuid": "80c8a42c-f421-4316-8dd4-23131164ad9d", 00:08:59.112 
"strip_size_kb": 64, 00:08:59.112 "state": "configuring", 00:08:59.112 "raid_level": "raid0", 00:08:59.112 "superblock": true, 00:08:59.112 "num_base_bdevs": 3, 00:08:59.112 "num_base_bdevs_discovered": 1, 00:08:59.112 "num_base_bdevs_operational": 3, 00:08:59.112 "base_bdevs_list": [ 00:08:59.112 { 00:08:59.112 "name": "pt1", 00:08:59.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.112 "is_configured": true, 00:08:59.112 "data_offset": 2048, 00:08:59.112 "data_size": 63488 00:08:59.112 }, 00:08:59.112 { 00:08:59.112 "name": null, 00:08:59.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.112 "is_configured": false, 00:08:59.112 "data_offset": 2048, 00:08:59.112 "data_size": 63488 00:08:59.112 }, 00:08:59.112 { 00:08:59.112 "name": null, 00:08:59.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.112 "is_configured": false, 00:08:59.112 "data_offset": 2048, 00:08:59.112 "data_size": 63488 00:08:59.112 } 00:08:59.112 ] 00:08:59.112 }' 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.112 12:34:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.372 [2024-12-14 12:34:59.082977] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.372 [2024-12-14 12:34:59.083133] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.372 [2024-12-14 12:34:59.083180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:59.372 [2024-12-14 12:34:59.083212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.372 [2024-12-14 12:34:59.083674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.372 [2024-12-14 12:34:59.083729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.372 [2024-12-14 12:34:59.083847] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.372 [2024-12-14 12:34:59.083906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.372 pt2 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.372 [2024-12-14 12:34:59.090959] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.372 12:34:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.372 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.632 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.632 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.632 "name": "raid_bdev1", 00:08:59.632 "uuid": "80c8a42c-f421-4316-8dd4-23131164ad9d", 00:08:59.632 "strip_size_kb": 64, 00:08:59.632 "state": "configuring", 00:08:59.632 "raid_level": "raid0", 00:08:59.632 "superblock": true, 00:08:59.632 "num_base_bdevs": 3, 00:08:59.632 "num_base_bdevs_discovered": 1, 00:08:59.632 "num_base_bdevs_operational": 3, 00:08:59.632 "base_bdevs_list": [ 00:08:59.632 { 00:08:59.632 "name": "pt1", 00:08:59.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.632 "is_configured": true, 00:08:59.632 "data_offset": 2048, 00:08:59.632 "data_size": 63488 00:08:59.632 }, 00:08:59.632 { 00:08:59.632 "name": null, 00:08:59.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.632 "is_configured": false, 00:08:59.632 "data_offset": 0, 00:08:59.632 "data_size": 63488 00:08:59.632 }, 00:08:59.632 { 00:08:59.632 "name": null, 00:08:59.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.632 
"is_configured": false, 00:08:59.632 "data_offset": 2048, 00:08:59.632 "data_size": 63488 00:08:59.632 } 00:08:59.632 ] 00:08:59.632 }' 00:08:59.632 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.632 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.920 [2024-12-14 12:34:59.538214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:59.920 [2024-12-14 12:34:59.538285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.920 [2024-12-14 12:34:59.538302] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:59.920 [2024-12-14 12:34:59.538313] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.920 [2024-12-14 12:34:59.538764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.920 [2024-12-14 12:34:59.538790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:59.920 [2024-12-14 12:34:59.538873] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:59.920 [2024-12-14 12:34:59.538904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:59.920 pt2 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.920 [2024-12-14 12:34:59.550183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:59.920 [2024-12-14 12:34:59.550232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:59.920 [2024-12-14 12:34:59.550245] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:59.920 [2024-12-14 12:34:59.550256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:59.920 [2024-12-14 12:34:59.550621] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:59.920 [2024-12-14 12:34:59.550642] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:59.920 [2024-12-14 12:34:59.550700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:59.920 [2024-12-14 12:34:59.550720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:59.920 [2024-12-14 12:34:59.550835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.920 [2024-12-14 12:34:59.550847] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.920 [2024-12-14 12:34:59.551088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:59.920 [2024-12-14 12:34:59.551231] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.920 [2024-12-14 12:34:59.551246] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:59.920 [2024-12-14 12:34:59.551390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.920 pt3 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.920 "name": "raid_bdev1", 00:08:59.920 "uuid": "80c8a42c-f421-4316-8dd4-23131164ad9d", 00:08:59.920 "strip_size_kb": 64, 00:08:59.920 "state": "online", 00:08:59.920 "raid_level": "raid0", 00:08:59.920 "superblock": true, 00:08:59.920 "num_base_bdevs": 3, 00:08:59.920 "num_base_bdevs_discovered": 3, 00:08:59.920 "num_base_bdevs_operational": 3, 00:08:59.920 "base_bdevs_list": [ 00:08:59.920 { 00:08:59.920 "name": "pt1", 00:08:59.920 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:59.920 "is_configured": true, 00:08:59.920 "data_offset": 2048, 00:08:59.920 "data_size": 63488 00:08:59.920 }, 00:08:59.920 { 00:08:59.920 "name": "pt2", 00:08:59.920 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:59.920 "is_configured": true, 00:08:59.920 "data_offset": 2048, 00:08:59.920 "data_size": 63488 00:08:59.920 }, 00:08:59.920 { 00:08:59.920 "name": "pt3", 00:08:59.920 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:59.920 "is_configured": true, 00:08:59.920 "data_offset": 2048, 00:08:59.920 "data_size": 63488 00:08:59.920 } 00:08:59.920 ] 00:08:59.920 }' 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.920 12:34:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:00.518 12:35:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:00.518 [2024-12-14 12:35:00.041681] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.518 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:00.518 "name": "raid_bdev1", 00:09:00.518 "aliases": [ 00:09:00.518 "80c8a42c-f421-4316-8dd4-23131164ad9d" 00:09:00.518 ], 00:09:00.518 "product_name": "Raid Volume", 00:09:00.518 "block_size": 512, 00:09:00.518 "num_blocks": 190464, 00:09:00.518 "uuid": "80c8a42c-f421-4316-8dd4-23131164ad9d", 00:09:00.518 "assigned_rate_limits": { 00:09:00.518 "rw_ios_per_sec": 0, 00:09:00.518 "rw_mbytes_per_sec": 0, 00:09:00.518 "r_mbytes_per_sec": 0, 00:09:00.518 "w_mbytes_per_sec": 0 00:09:00.518 }, 00:09:00.518 "claimed": false, 00:09:00.518 "zoned": false, 00:09:00.518 "supported_io_types": { 00:09:00.518 "read": true, 00:09:00.518 "write": true, 00:09:00.518 "unmap": true, 00:09:00.518 "flush": true, 00:09:00.518 "reset": true, 00:09:00.518 "nvme_admin": false, 00:09:00.518 "nvme_io": false, 00:09:00.518 "nvme_io_md": false, 00:09:00.518 
"write_zeroes": true, 00:09:00.518 "zcopy": false, 00:09:00.518 "get_zone_info": false, 00:09:00.518 "zone_management": false, 00:09:00.518 "zone_append": false, 00:09:00.518 "compare": false, 00:09:00.518 "compare_and_write": false, 00:09:00.518 "abort": false, 00:09:00.518 "seek_hole": false, 00:09:00.518 "seek_data": false, 00:09:00.518 "copy": false, 00:09:00.518 "nvme_iov_md": false 00:09:00.518 }, 00:09:00.518 "memory_domains": [ 00:09:00.518 { 00:09:00.518 "dma_device_id": "system", 00:09:00.518 "dma_device_type": 1 00:09:00.518 }, 00:09:00.518 { 00:09:00.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.518 "dma_device_type": 2 00:09:00.518 }, 00:09:00.518 { 00:09:00.518 "dma_device_id": "system", 00:09:00.518 "dma_device_type": 1 00:09:00.518 }, 00:09:00.518 { 00:09:00.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.518 "dma_device_type": 2 00:09:00.518 }, 00:09:00.518 { 00:09:00.519 "dma_device_id": "system", 00:09:00.519 "dma_device_type": 1 00:09:00.519 }, 00:09:00.519 { 00:09:00.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.519 "dma_device_type": 2 00:09:00.519 } 00:09:00.519 ], 00:09:00.519 "driver_specific": { 00:09:00.519 "raid": { 00:09:00.519 "uuid": "80c8a42c-f421-4316-8dd4-23131164ad9d", 00:09:00.519 "strip_size_kb": 64, 00:09:00.519 "state": "online", 00:09:00.519 "raid_level": "raid0", 00:09:00.519 "superblock": true, 00:09:00.519 "num_base_bdevs": 3, 00:09:00.519 "num_base_bdevs_discovered": 3, 00:09:00.519 "num_base_bdevs_operational": 3, 00:09:00.519 "base_bdevs_list": [ 00:09:00.519 { 00:09:00.519 "name": "pt1", 00:09:00.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:00.519 "is_configured": true, 00:09:00.519 "data_offset": 2048, 00:09:00.519 "data_size": 63488 00:09:00.519 }, 00:09:00.519 { 00:09:00.519 "name": "pt2", 00:09:00.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:00.519 "is_configured": true, 00:09:00.519 "data_offset": 2048, 00:09:00.519 "data_size": 63488 00:09:00.519 }, 00:09:00.519 
{ 00:09:00.519 "name": "pt3", 00:09:00.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:00.519 "is_configured": true, 00:09:00.519 "data_offset": 2048, 00:09:00.519 "data_size": 63488 00:09:00.519 } 00:09:00.519 ] 00:09:00.519 } 00:09:00.519 } 00:09:00.519 }' 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:00.519 pt2 00:09:00.519 pt3' 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.519 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.828 [2024-12-14 
12:35:00.337139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 80c8a42c-f421-4316-8dd4-23131164ad9d '!=' 80c8a42c-f421-4316-8dd4-23131164ad9d ']' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66866 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66866 ']' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66866 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66866 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.828 killing process with pid 66866 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66866' 00:09:00.828 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66866 00:09:00.828 [2024-12-14 12:35:00.424084] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:00.828 [2024-12-14 12:35:00.424182] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.829 [2024-12-14 12:35:00.424244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:00.829 [2024-12-14 12:35:00.424257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:00.829 12:35:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66866 00:09:01.088 [2024-12-14 12:35:00.721748] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:02.470 12:35:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:02.470 00:09:02.470 real 0m5.343s 00:09:02.470 user 0m7.806s 00:09:02.470 sys 0m0.853s 00:09:02.470 ************************************ 00:09:02.470 END TEST raid_superblock_test 00:09:02.470 ************************************ 00:09:02.470 12:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.470 12:35:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.470 12:35:01 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:02.470 12:35:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:02.470 12:35:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.470 12:35:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:02.470 ************************************ 00:09:02.470 START TEST raid_read_error_test 00:09:02.470 ************************************ 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:02.470 12:35:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.iIT3I5Fq4P 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67119 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67119 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67119 ']' 00:09:02.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:02.470 12:35:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.470 [2024-12-14 12:35:02.018543] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:02.470 [2024-12-14 12:35:02.018654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67119 ] 00:09:02.470 [2024-12-14 12:35:02.193687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.730 [2024-12-14 12:35:02.309573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.990 [2024-12-14 12:35:02.513105] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.990 [2024-12-14 12:35:02.513169] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 BaseBdev1_malloc 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 true 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 [2024-12-14 12:35:02.910208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:03.251 [2024-12-14 12:35:02.910323] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.251 [2024-12-14 12:35:02.910358] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:03.251 [2024-12-14 12:35:02.910372] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.251 [2024-12-14 12:35:02.912538] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.251 [2024-12-14 12:35:02.912578] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:03.251 BaseBdev1 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 BaseBdev2_malloc 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 true 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.251 [2024-12-14 12:35:02.978319] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:03.251 [2024-12-14 12:35:02.978441] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.251 [2024-12-14 12:35:02.978465] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:03.251 [2024-12-14 12:35:02.978477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.251 [2024-12-14 12:35:02.980664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.251 [2024-12-14 12:35:02.980704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:03.251 BaseBdev2 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.251 12:35:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.511 BaseBdev3_malloc 00:09:03.511 12:35:03 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.511 true 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.511 [2024-12-14 12:35:03.058874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:03.511 [2024-12-14 12:35:03.058938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:03.511 [2024-12-14 12:35:03.058960] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:03.511 [2024-12-14 12:35:03.058971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:03.511 [2024-12-14 12:35:03.061329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:03.511 [2024-12-14 12:35:03.061409] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:03.511 BaseBdev3 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.511 [2024-12-14 12:35:03.070926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.511 [2024-12-14 12:35:03.072787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.511 [2024-12-14 12:35:03.072904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:03.511 [2024-12-14 12:35:03.073193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:03.511 [2024-12-14 12:35:03.073249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:03.511 [2024-12-14 12:35:03.073557] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:03.511 [2024-12-14 12:35:03.073796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:03.511 [2024-12-14 12:35:03.073848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:03.511 [2024-12-14 12:35:03.074109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.511 12:35:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.511 "name": "raid_bdev1", 00:09:03.511 "uuid": "c299be59-59be-488a-aac6-33e59d5b4647", 00:09:03.511 "strip_size_kb": 64, 00:09:03.511 "state": "online", 00:09:03.511 "raid_level": "raid0", 00:09:03.511 "superblock": true, 00:09:03.511 "num_base_bdevs": 3, 00:09:03.511 "num_base_bdevs_discovered": 3, 00:09:03.511 "num_base_bdevs_operational": 3, 00:09:03.511 "base_bdevs_list": [ 00:09:03.511 { 00:09:03.511 "name": "BaseBdev1", 00:09:03.511 "uuid": "7302266a-54e9-5fdd-8af1-514e2823900a", 00:09:03.511 "is_configured": true, 00:09:03.511 "data_offset": 2048, 00:09:03.511 "data_size": 63488 00:09:03.511 }, 00:09:03.511 { 00:09:03.511 "name": "BaseBdev2", 00:09:03.511 "uuid": "4a820829-52f3-5508-8193-4225245078bd", 00:09:03.511 "is_configured": true, 00:09:03.511 "data_offset": 2048, 00:09:03.511 "data_size": 63488 
00:09:03.511 }, 00:09:03.511 { 00:09:03.511 "name": "BaseBdev3", 00:09:03.511 "uuid": "f76d4f37-c8b2-51d8-9353-d10b02cb740f", 00:09:03.511 "is_configured": true, 00:09:03.511 "data_offset": 2048, 00:09:03.511 "data_size": 63488 00:09:03.511 } 00:09:03.511 ] 00:09:03.511 }' 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.511 12:35:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.081 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:04.081 12:35:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:04.081 [2024-12-14 12:35:03.655459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.021 "name": "raid_bdev1", 00:09:05.021 "uuid": "c299be59-59be-488a-aac6-33e59d5b4647", 00:09:05.021 "strip_size_kb": 64, 00:09:05.021 "state": "online", 00:09:05.021 "raid_level": "raid0", 00:09:05.021 "superblock": true, 00:09:05.021 "num_base_bdevs": 3, 00:09:05.021 "num_base_bdevs_discovered": 3, 00:09:05.021 "num_base_bdevs_operational": 3, 00:09:05.021 "base_bdevs_list": [ 00:09:05.021 { 00:09:05.021 "name": "BaseBdev1", 00:09:05.021 "uuid": "7302266a-54e9-5fdd-8af1-514e2823900a", 00:09:05.021 "is_configured": true, 00:09:05.021 "data_offset": 2048, 00:09:05.021 "data_size": 63488 
00:09:05.021 }, 00:09:05.021 { 00:09:05.021 "name": "BaseBdev2", 00:09:05.021 "uuid": "4a820829-52f3-5508-8193-4225245078bd", 00:09:05.021 "is_configured": true, 00:09:05.021 "data_offset": 2048, 00:09:05.021 "data_size": 63488 00:09:05.021 }, 00:09:05.021 { 00:09:05.021 "name": "BaseBdev3", 00:09:05.021 "uuid": "f76d4f37-c8b2-51d8-9353-d10b02cb740f", 00:09:05.021 "is_configured": true, 00:09:05.021 "data_offset": 2048, 00:09:05.021 "data_size": 63488 00:09:05.021 } 00:09:05.021 ] 00:09:05.021 }' 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.021 12:35:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.591 [2024-12-14 12:35:05.031439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.591 [2024-12-14 12:35:05.031547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.591 [2024-12-14 12:35:05.034449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.591 [2024-12-14 12:35:05.034550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.591 [2024-12-14 12:35:05.034623] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.591 [2024-12-14 12:35:05.034685] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:05.591 { 00:09:05.591 "results": [ 00:09:05.591 { 00:09:05.591 "job": "raid_bdev1", 00:09:05.591 "core_mask": "0x1", 00:09:05.591 "workload": "randrw", 00:09:05.591 "percentage": 50, 
00:09:05.591 "status": "finished", 00:09:05.591 "queue_depth": 1, 00:09:05.591 "io_size": 131072, 00:09:05.591 "runtime": 1.377053, 00:09:05.591 "iops": 15424.242930373777, 00:09:05.591 "mibps": 1928.0303662967221, 00:09:05.591 "io_failed": 1, 00:09:05.591 "io_timeout": 0, 00:09:05.591 "avg_latency_us": 89.80762614281642, 00:09:05.591 "min_latency_us": 18.445414847161572, 00:09:05.591 "max_latency_us": 1352.216593886463 00:09:05.591 } 00:09:05.591 ], 00:09:05.591 "core_count": 1 00:09:05.591 } 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67119 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67119 ']' 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67119 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67119 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67119' 00:09:05.591 killing process with pid 67119 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67119 00:09:05.591 [2024-12-14 12:35:05.069832] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.591 12:35:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67119 00:09:05.591 [2024-12-14 
12:35:05.304106] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.iIT3I5Fq4P 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:06.970 00:09:06.970 real 0m4.585s 00:09:06.970 user 0m5.451s 00:09:06.970 sys 0m0.577s 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.970 ************************************ 00:09:06.970 END TEST raid_read_error_test 00:09:06.970 ************************************ 00:09:06.970 12:35:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 12:35:06 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:06.970 12:35:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:06.970 12:35:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.970 12:35:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.970 ************************************ 00:09:06.970 START TEST raid_write_error_test 00:09:06.970 ************************************ 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:06.970 12:35:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:06.970 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:06.971 12:35:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FRhJ6frmS4 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67265 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67265 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67265 ']' 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.971 12:35:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.971 [2024-12-14 12:35:06.673805] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:06.971 [2024-12-14 12:35:06.673998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67265 ] 00:09:07.230 [2024-12-14 12:35:06.829851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.230 [2024-12-14 12:35:06.945292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.489 [2024-12-14 12:35:07.148373] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.489 [2024-12-14 12:35:07.148516] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.057 BaseBdev1_malloc 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.057 true 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.057 [2024-12-14 12:35:07.573075] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:08.057 [2024-12-14 12:35:07.573136] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.057 [2024-12-14 12:35:07.573174] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:08.057 [2024-12-14 12:35:07.573187] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.057 [2024-12-14 12:35:07.575542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.057 [2024-12-14 12:35:07.575585] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:08.057 BaseBdev1 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:08.057 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:08.058 BaseBdev2_malloc 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.058 true 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.058 [2024-12-14 12:35:07.639978] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:08.058 [2024-12-14 12:35:07.640034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.058 [2024-12-14 12:35:07.640064] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:08.058 [2024-12-14 12:35:07.640075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.058 [2024-12-14 12:35:07.642158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.058 [2024-12-14 12:35:07.642238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:08.058 BaseBdev2 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:08.058 12:35:07 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.058 BaseBdev3_malloc 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.058 true 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.058 [2024-12-14 12:35:07.724415] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:08.058 [2024-12-14 12:35:07.724472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.058 [2024-12-14 12:35:07.724505] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:08.058 [2024-12-14 12:35:07.724515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.058 [2024-12-14 12:35:07.726582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.058 [2024-12-14 12:35:07.726622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:08.058 BaseBdev3 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.058 [2024-12-14 12:35:07.736471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.058 [2024-12-14 12:35:07.738313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.058 [2024-12-14 12:35:07.738441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.058 [2024-12-14 12:35:07.738669] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:08.058 [2024-12-14 12:35:07.738718] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:08.058 [2024-12-14 12:35:07.738985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:08.058 [2024-12-14 12:35:07.739193] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:08.058 [2024-12-14 12:35:07.739240] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:08.058 [2024-12-14 12:35:07.739426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.058 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.318 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.318 "name": "raid_bdev1", 00:09:08.318 "uuid": "b561080b-1699-4a78-8705-0322fdd4bc79", 00:09:08.318 "strip_size_kb": 64, 00:09:08.318 "state": "online", 00:09:08.318 "raid_level": "raid0", 00:09:08.318 "superblock": true, 00:09:08.318 "num_base_bdevs": 3, 00:09:08.318 "num_base_bdevs_discovered": 3, 00:09:08.318 "num_base_bdevs_operational": 3, 00:09:08.318 "base_bdevs_list": [ 00:09:08.318 { 00:09:08.318 "name": "BaseBdev1", 
00:09:08.318 "uuid": "0a2d5963-4a5a-5ed0-b8ee-eb745784364e", 00:09:08.318 "is_configured": true, 00:09:08.318 "data_offset": 2048, 00:09:08.318 "data_size": 63488 00:09:08.318 }, 00:09:08.318 { 00:09:08.318 "name": "BaseBdev2", 00:09:08.318 "uuid": "a9181a01-e879-5169-a782-c778279876f9", 00:09:08.318 "is_configured": true, 00:09:08.318 "data_offset": 2048, 00:09:08.318 "data_size": 63488 00:09:08.318 }, 00:09:08.318 { 00:09:08.318 "name": "BaseBdev3", 00:09:08.318 "uuid": "b463c037-721a-53c1-bc08-1d5828d2e348", 00:09:08.318 "is_configured": true, 00:09:08.318 "data_offset": 2048, 00:09:08.318 "data_size": 63488 00:09:08.318 } 00:09:08.318 ] 00:09:08.318 }' 00:09:08.318 12:35:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.318 12:35:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.603 12:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:08.603 12:35:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:08.603 [2024-12-14 12:35:08.260907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.556 "name": "raid_bdev1", 00:09:09.556 "uuid": "b561080b-1699-4a78-8705-0322fdd4bc79", 00:09:09.556 "strip_size_kb": 64, 00:09:09.556 "state": "online", 00:09:09.556 
"raid_level": "raid0", 00:09:09.556 "superblock": true, 00:09:09.556 "num_base_bdevs": 3, 00:09:09.556 "num_base_bdevs_discovered": 3, 00:09:09.556 "num_base_bdevs_operational": 3, 00:09:09.556 "base_bdevs_list": [ 00:09:09.556 { 00:09:09.556 "name": "BaseBdev1", 00:09:09.556 "uuid": "0a2d5963-4a5a-5ed0-b8ee-eb745784364e", 00:09:09.556 "is_configured": true, 00:09:09.556 "data_offset": 2048, 00:09:09.556 "data_size": 63488 00:09:09.556 }, 00:09:09.556 { 00:09:09.556 "name": "BaseBdev2", 00:09:09.556 "uuid": "a9181a01-e879-5169-a782-c778279876f9", 00:09:09.556 "is_configured": true, 00:09:09.556 "data_offset": 2048, 00:09:09.556 "data_size": 63488 00:09:09.556 }, 00:09:09.556 { 00:09:09.556 "name": "BaseBdev3", 00:09:09.556 "uuid": "b463c037-721a-53c1-bc08-1d5828d2e348", 00:09:09.556 "is_configured": true, 00:09:09.556 "data_offset": 2048, 00:09:09.556 "data_size": 63488 00:09:09.556 } 00:09:09.556 ] 00:09:09.556 }' 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.556 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.126 [2024-12-14 12:35:09.608773] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:10.126 [2024-12-14 12:35:09.608877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.126 [2024-12-14 12:35:09.612061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.126 [2024-12-14 12:35:09.612151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:10.126 [2024-12-14 12:35:09.612229] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.126 [2024-12-14 12:35:09.612278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67265 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67265 ']' 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67265 00:09:10.126 { 00:09:10.126 "results": [ 00:09:10.126 { 00:09:10.126 "job": "raid_bdev1", 00:09:10.126 "core_mask": "0x1", 00:09:10.126 "workload": "randrw", 00:09:10.126 "percentage": 50, 00:09:10.126 "status": "finished", 00:09:10.126 "queue_depth": 1, 00:09:10.126 "io_size": 131072, 00:09:10.126 "runtime": 1.348837, 00:09:10.126 "iops": 15521.51964989098, 00:09:10.126 "mibps": 1940.1899562363726, 00:09:10.126 "io_failed": 1, 00:09:10.126 "io_timeout": 0, 00:09:10.126 "avg_latency_us": 89.23099879801602, 00:09:10.126 "min_latency_us": 26.494323144104804, 00:09:10.126 "max_latency_us": 1402.2986899563318 00:09:10.126 } 00:09:10.126 ], 00:09:10.126 "core_count": 1 00:09:10.126 } 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67265 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.126 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.127 12:35:09 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 67265' 00:09:10.127 killing process with pid 67265 00:09:10.127 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67265 00:09:10.127 12:35:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67265 00:09:10.127 [2024-12-14 12:35:09.641306] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.386 [2024-12-14 12:35:09.870060] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FRhJ6frmS4 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:11.325 00:09:11.325 real 0m4.474s 00:09:11.325 user 0m5.289s 00:09:11.325 sys 0m0.548s 00:09:11.325 ************************************ 00:09:11.325 END TEST raid_write_error_test 00:09:11.325 ************************************ 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.325 12:35:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.584 12:35:11 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:11.584 12:35:11 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:11.584 12:35:11 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.584 12:35:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.584 12:35:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.584 ************************************ 00:09:11.584 START TEST raid_state_function_test 00:09:11.584 ************************************ 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:11.584 12:35:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:11.584 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:11.585 Process raid pid: 67403 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67403 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67403' 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67403 00:09:11.585 12:35:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67403 ']' 00:09:11.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.585 12:35:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.585 [2024-12-14 12:35:11.206717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:11.585 [2024-12-14 12:35:11.206829] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.842 [2024-12-14 12:35:11.381935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.842 [2024-12-14 12:35:11.493895] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.101 [2024-12-14 12:35:11.695694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.101 [2024-12-14 12:35:11.695741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.361 [2024-12-14 12:35:12.047755] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.361 [2024-12-14 12:35:12.047810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.361 [2024-12-14 12:35:12.047821] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.361 [2024-12-14 12:35:12.047831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:12.361 [2024-12-14 12:35:12.047837] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.361 [2024-12-14 12:35:12.047845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.361 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.621 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.621 "name": "Existed_Raid", 00:09:12.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.621 "strip_size_kb": 64, 00:09:12.621 "state": "configuring", 00:09:12.621 "raid_level": "concat", 00:09:12.621 "superblock": false, 00:09:12.621 "num_base_bdevs": 3, 00:09:12.621 "num_base_bdevs_discovered": 0, 00:09:12.621 "num_base_bdevs_operational": 3, 00:09:12.621 "base_bdevs_list": [ 00:09:12.621 { 00:09:12.621 "name": "BaseBdev1", 00:09:12.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.621 "is_configured": false, 00:09:12.621 "data_offset": 0, 00:09:12.621 "data_size": 0 00:09:12.621 }, 00:09:12.621 { 00:09:12.621 "name": "BaseBdev2", 00:09:12.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.621 "is_configured": false, 00:09:12.621 "data_offset": 0, 00:09:12.621 "data_size": 0 00:09:12.621 }, 00:09:12.621 { 00:09:12.621 "name": "BaseBdev3", 00:09:12.621 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:12.621 "is_configured": false, 00:09:12.621 "data_offset": 0, 00:09:12.621 "data_size": 0 00:09:12.621 } 00:09:12.621 ] 00:09:12.621 }' 00:09:12.621 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.621 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.880 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:12.880 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.880 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.881 [2024-12-14 12:35:12.510895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:12.881 [2024-12-14 12:35:12.510980] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.881 [2024-12-14 12:35:12.522872] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:12.881 [2024-12-14 12:35:12.522950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:12.881 [2024-12-14 12:35:12.522980] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:12.881 [2024-12-14 12:35:12.523036] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:12.881 [2024-12-14 12:35:12.523074] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:12.881 [2024-12-14 12:35:12.523097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.881 [2024-12-14 12:35:12.567756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.881 BaseBdev1 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.881 [ 00:09:12.881 { 00:09:12.881 "name": "BaseBdev1", 00:09:12.881 "aliases": [ 00:09:12.881 "ba747a89-526e-4378-a067-2a1d99ea935e" 00:09:12.881 ], 00:09:12.881 "product_name": "Malloc disk", 00:09:12.881 "block_size": 512, 00:09:12.881 "num_blocks": 65536, 00:09:12.881 "uuid": "ba747a89-526e-4378-a067-2a1d99ea935e", 00:09:12.881 "assigned_rate_limits": { 00:09:12.881 "rw_ios_per_sec": 0, 00:09:12.881 "rw_mbytes_per_sec": 0, 00:09:12.881 "r_mbytes_per_sec": 0, 00:09:12.881 "w_mbytes_per_sec": 0 00:09:12.881 }, 00:09:12.881 "claimed": true, 00:09:12.881 "claim_type": "exclusive_write", 00:09:12.881 "zoned": false, 00:09:12.881 "supported_io_types": { 00:09:12.881 "read": true, 00:09:12.881 "write": true, 00:09:12.881 "unmap": true, 00:09:12.881 "flush": true, 00:09:12.881 "reset": true, 00:09:12.881 "nvme_admin": false, 00:09:12.881 "nvme_io": false, 00:09:12.881 "nvme_io_md": false, 00:09:12.881 "write_zeroes": true, 00:09:12.881 "zcopy": true, 00:09:12.881 "get_zone_info": false, 00:09:12.881 "zone_management": false, 00:09:12.881 "zone_append": false, 00:09:12.881 "compare": false, 00:09:12.881 "compare_and_write": false, 00:09:12.881 "abort": true, 00:09:12.881 "seek_hole": false, 00:09:12.881 "seek_data": false, 00:09:12.881 "copy": true, 00:09:12.881 "nvme_iov_md": false 00:09:12.881 }, 00:09:12.881 "memory_domains": [ 00:09:12.881 { 00:09:12.881 "dma_device_id": "system", 00:09:12.881 "dma_device_type": 1 00:09:12.881 }, 00:09:12.881 { 00:09:12.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:12.881 "dma_device_type": 2 00:09:12.881 } 00:09:12.881 ], 00:09:12.881 "driver_specific": {} 00:09:12.881 } 00:09:12.881 ] 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.881 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.141 12:35:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.141 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.141 "name": "Existed_Raid", 00:09:13.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.141 "strip_size_kb": 64, 00:09:13.141 "state": "configuring", 00:09:13.141 "raid_level": "concat", 00:09:13.141 "superblock": false, 00:09:13.141 "num_base_bdevs": 3, 00:09:13.141 "num_base_bdevs_discovered": 1, 00:09:13.141 "num_base_bdevs_operational": 3, 00:09:13.141 "base_bdevs_list": [ 00:09:13.141 { 00:09:13.141 "name": "BaseBdev1", 00:09:13.141 "uuid": "ba747a89-526e-4378-a067-2a1d99ea935e", 00:09:13.141 "is_configured": true, 00:09:13.141 "data_offset": 0, 00:09:13.141 "data_size": 65536 00:09:13.141 }, 00:09:13.141 { 00:09:13.141 "name": "BaseBdev2", 00:09:13.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.141 "is_configured": false, 00:09:13.141 "data_offset": 0, 00:09:13.141 "data_size": 0 00:09:13.141 }, 00:09:13.141 { 00:09:13.141 "name": "BaseBdev3", 00:09:13.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.141 "is_configured": false, 00:09:13.141 "data_offset": 0, 00:09:13.141 "data_size": 0 00:09:13.141 } 00:09:13.141 ] 00:09:13.141 }' 00:09:13.141 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.141 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.400 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:13.400 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.400 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.400 [2024-12-14 12:35:12.995126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:13.400 [2024-12-14 12:35:12.995186] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:13.400 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.400 12:35:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.400 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.400 12:35:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.400 [2024-12-14 12:35:13.007142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.400 [2024-12-14 12:35:13.008916] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.400 [2024-12-14 12:35:13.008961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.400 [2024-12-14 12:35:13.008971] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.400 [2024-12-14 12:35:13.008980] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.400 12:35:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.400 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.401 "name": "Existed_Raid", 00:09:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.401 "strip_size_kb": 64, 00:09:13.401 "state": "configuring", 00:09:13.401 "raid_level": "concat", 00:09:13.401 "superblock": false, 00:09:13.401 "num_base_bdevs": 3, 00:09:13.401 "num_base_bdevs_discovered": 1, 00:09:13.401 "num_base_bdevs_operational": 3, 00:09:13.401 "base_bdevs_list": [ 00:09:13.401 { 00:09:13.401 "name": "BaseBdev1", 00:09:13.401 "uuid": "ba747a89-526e-4378-a067-2a1d99ea935e", 00:09:13.401 "is_configured": true, 00:09:13.401 "data_offset": 
0, 00:09:13.401 "data_size": 65536 00:09:13.401 }, 00:09:13.401 { 00:09:13.401 "name": "BaseBdev2", 00:09:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.401 "is_configured": false, 00:09:13.401 "data_offset": 0, 00:09:13.401 "data_size": 0 00:09:13.401 }, 00:09:13.401 { 00:09:13.401 "name": "BaseBdev3", 00:09:13.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.401 "is_configured": false, 00:09:13.401 "data_offset": 0, 00:09:13.401 "data_size": 0 00:09:13.401 } 00:09:13.401 ] 00:09:13.401 }' 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.401 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.969 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.969 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.969 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.970 [2024-12-14 12:35:13.484052] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.970 BaseBdev2 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.970 [ 00:09:13.970 { 00:09:13.970 "name": "BaseBdev2", 00:09:13.970 "aliases": [ 00:09:13.970 "8b020856-6cfc-46e9-8124-0f83da9f1dfe" 00:09:13.970 ], 00:09:13.970 "product_name": "Malloc disk", 00:09:13.970 "block_size": 512, 00:09:13.970 "num_blocks": 65536, 00:09:13.970 "uuid": "8b020856-6cfc-46e9-8124-0f83da9f1dfe", 00:09:13.970 "assigned_rate_limits": { 00:09:13.970 "rw_ios_per_sec": 0, 00:09:13.970 "rw_mbytes_per_sec": 0, 00:09:13.970 "r_mbytes_per_sec": 0, 00:09:13.970 "w_mbytes_per_sec": 0 00:09:13.970 }, 00:09:13.970 "claimed": true, 00:09:13.970 "claim_type": "exclusive_write", 00:09:13.970 "zoned": false, 00:09:13.970 "supported_io_types": { 00:09:13.970 "read": true, 00:09:13.970 "write": true, 00:09:13.970 "unmap": true, 00:09:13.970 "flush": true, 00:09:13.970 "reset": true, 00:09:13.970 "nvme_admin": false, 00:09:13.970 "nvme_io": false, 00:09:13.970 "nvme_io_md": false, 00:09:13.970 "write_zeroes": true, 00:09:13.970 "zcopy": true, 00:09:13.970 "get_zone_info": false, 00:09:13.970 "zone_management": false, 00:09:13.970 "zone_append": false, 00:09:13.970 "compare": false, 00:09:13.970 "compare_and_write": false, 00:09:13.970 "abort": true, 00:09:13.970 "seek_hole": 
false, 00:09:13.970 "seek_data": false, 00:09:13.970 "copy": true, 00:09:13.970 "nvme_iov_md": false 00:09:13.970 }, 00:09:13.970 "memory_domains": [ 00:09:13.970 { 00:09:13.970 "dma_device_id": "system", 00:09:13.970 "dma_device_type": 1 00:09:13.970 }, 00:09:13.970 { 00:09:13.970 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.970 "dma_device_type": 2 00:09:13.970 } 00:09:13.970 ], 00:09:13.970 "driver_specific": {} 00:09:13.970 } 00:09:13.970 ] 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.970 "name": "Existed_Raid", 00:09:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.970 "strip_size_kb": 64, 00:09:13.970 "state": "configuring", 00:09:13.970 "raid_level": "concat", 00:09:13.970 "superblock": false, 00:09:13.970 "num_base_bdevs": 3, 00:09:13.970 "num_base_bdevs_discovered": 2, 00:09:13.970 "num_base_bdevs_operational": 3, 00:09:13.970 "base_bdevs_list": [ 00:09:13.970 { 00:09:13.970 "name": "BaseBdev1", 00:09:13.970 "uuid": "ba747a89-526e-4378-a067-2a1d99ea935e", 00:09:13.970 "is_configured": true, 00:09:13.970 "data_offset": 0, 00:09:13.970 "data_size": 65536 00:09:13.970 }, 00:09:13.970 { 00:09:13.970 "name": "BaseBdev2", 00:09:13.970 "uuid": "8b020856-6cfc-46e9-8124-0f83da9f1dfe", 00:09:13.970 "is_configured": true, 00:09:13.970 "data_offset": 0, 00:09:13.970 "data_size": 65536 00:09:13.970 }, 00:09:13.970 { 00:09:13.970 "name": "BaseBdev3", 00:09:13.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.970 "is_configured": false, 00:09:13.970 "data_offset": 0, 00:09:13.970 "data_size": 0 00:09:13.970 } 00:09:13.970 ] 00:09:13.970 }' 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.970 12:35:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:14.230 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:14.230 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.230 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.489 [2024-12-14 12:35:13.979803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:14.489 [2024-12-14 12:35:13.979943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:14.489 [2024-12-14 12:35:13.979961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:14.489 [2024-12-14 12:35:13.980265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:14.489 [2024-12-14 12:35:13.980451] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:14.489 [2024-12-14 12:35:13.980463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:14.489 [2024-12-14 12:35:13.980753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.489 BaseBdev3 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.489 12:35:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.489 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.490 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.490 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:14.490 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.490 12:35:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.490 [ 00:09:14.490 { 00:09:14.490 "name": "BaseBdev3", 00:09:14.490 "aliases": [ 00:09:14.490 "3d895508-17c1-4754-8e1e-3695d87705b7" 00:09:14.490 ], 00:09:14.490 "product_name": "Malloc disk", 00:09:14.490 "block_size": 512, 00:09:14.490 "num_blocks": 65536, 00:09:14.490 "uuid": "3d895508-17c1-4754-8e1e-3695d87705b7", 00:09:14.490 "assigned_rate_limits": { 00:09:14.490 "rw_ios_per_sec": 0, 00:09:14.490 "rw_mbytes_per_sec": 0, 00:09:14.490 "r_mbytes_per_sec": 0, 00:09:14.490 "w_mbytes_per_sec": 0 00:09:14.490 }, 00:09:14.490 "claimed": true, 00:09:14.490 "claim_type": "exclusive_write", 00:09:14.490 "zoned": false, 00:09:14.490 "supported_io_types": { 00:09:14.490 "read": true, 00:09:14.490 "write": true, 00:09:14.490 "unmap": true, 00:09:14.490 "flush": true, 00:09:14.490 "reset": true, 00:09:14.490 "nvme_admin": false, 00:09:14.490 "nvme_io": false, 00:09:14.490 "nvme_io_md": false, 00:09:14.490 "write_zeroes": true, 00:09:14.490 "zcopy": true, 00:09:14.490 "get_zone_info": false, 00:09:14.490 "zone_management": false, 00:09:14.490 "zone_append": false, 00:09:14.490 "compare": false, 
00:09:14.490 "compare_and_write": false, 00:09:14.490 "abort": true, 00:09:14.490 "seek_hole": false, 00:09:14.490 "seek_data": false, 00:09:14.490 "copy": true, 00:09:14.490 "nvme_iov_md": false 00:09:14.490 }, 00:09:14.490 "memory_domains": [ 00:09:14.490 { 00:09:14.490 "dma_device_id": "system", 00:09:14.490 "dma_device_type": 1 00:09:14.490 }, 00:09:14.490 { 00:09:14.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.490 "dma_device_type": 2 00:09:14.490 } 00:09:14.490 ], 00:09:14.490 "driver_specific": {} 00:09:14.490 } 00:09:14.490 ] 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.490 "name": "Existed_Raid", 00:09:14.490 "uuid": "e3125207-82ab-48d3-92b8-c9c2a13c5515", 00:09:14.490 "strip_size_kb": 64, 00:09:14.490 "state": "online", 00:09:14.490 "raid_level": "concat", 00:09:14.490 "superblock": false, 00:09:14.490 "num_base_bdevs": 3, 00:09:14.490 "num_base_bdevs_discovered": 3, 00:09:14.490 "num_base_bdevs_operational": 3, 00:09:14.490 "base_bdevs_list": [ 00:09:14.490 { 00:09:14.490 "name": "BaseBdev1", 00:09:14.490 "uuid": "ba747a89-526e-4378-a067-2a1d99ea935e", 00:09:14.490 "is_configured": true, 00:09:14.490 "data_offset": 0, 00:09:14.490 "data_size": 65536 00:09:14.490 }, 00:09:14.490 { 00:09:14.490 "name": "BaseBdev2", 00:09:14.490 "uuid": "8b020856-6cfc-46e9-8124-0f83da9f1dfe", 00:09:14.490 "is_configured": true, 00:09:14.490 "data_offset": 0, 00:09:14.490 "data_size": 65536 00:09:14.490 }, 00:09:14.490 { 00:09:14.490 "name": "BaseBdev3", 00:09:14.490 "uuid": "3d895508-17c1-4754-8e1e-3695d87705b7", 00:09:14.490 "is_configured": true, 00:09:14.490 "data_offset": 0, 00:09:14.490 "data_size": 65536 00:09:14.490 } 00:09:14.490 ] 00:09:14.490 }' 00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:14.490 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.749 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.749 [2024-12-14 12:35:14.475380] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.009 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.009 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.009 "name": "Existed_Raid", 00:09:15.009 "aliases": [ 00:09:15.009 "e3125207-82ab-48d3-92b8-c9c2a13c5515" 00:09:15.009 ], 00:09:15.009 "product_name": "Raid Volume", 00:09:15.009 "block_size": 512, 00:09:15.009 "num_blocks": 196608, 00:09:15.009 "uuid": "e3125207-82ab-48d3-92b8-c9c2a13c5515", 00:09:15.009 "assigned_rate_limits": { 00:09:15.009 "rw_ios_per_sec": 0, 00:09:15.009 "rw_mbytes_per_sec": 0, 00:09:15.009 "r_mbytes_per_sec": 
0, 00:09:15.009 "w_mbytes_per_sec": 0 00:09:15.009 }, 00:09:15.009 "claimed": false, 00:09:15.009 "zoned": false, 00:09:15.009 "supported_io_types": { 00:09:15.009 "read": true, 00:09:15.009 "write": true, 00:09:15.009 "unmap": true, 00:09:15.009 "flush": true, 00:09:15.009 "reset": true, 00:09:15.009 "nvme_admin": false, 00:09:15.009 "nvme_io": false, 00:09:15.009 "nvme_io_md": false, 00:09:15.009 "write_zeroes": true, 00:09:15.009 "zcopy": false, 00:09:15.009 "get_zone_info": false, 00:09:15.009 "zone_management": false, 00:09:15.009 "zone_append": false, 00:09:15.009 "compare": false, 00:09:15.009 "compare_and_write": false, 00:09:15.009 "abort": false, 00:09:15.009 "seek_hole": false, 00:09:15.009 "seek_data": false, 00:09:15.009 "copy": false, 00:09:15.009 "nvme_iov_md": false 00:09:15.009 }, 00:09:15.009 "memory_domains": [ 00:09:15.009 { 00:09:15.009 "dma_device_id": "system", 00:09:15.009 "dma_device_type": 1 00:09:15.009 }, 00:09:15.009 { 00:09:15.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.009 "dma_device_type": 2 00:09:15.009 }, 00:09:15.009 { 00:09:15.009 "dma_device_id": "system", 00:09:15.009 "dma_device_type": 1 00:09:15.009 }, 00:09:15.009 { 00:09:15.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.009 "dma_device_type": 2 00:09:15.009 }, 00:09:15.009 { 00:09:15.009 "dma_device_id": "system", 00:09:15.009 "dma_device_type": 1 00:09:15.009 }, 00:09:15.009 { 00:09:15.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.010 "dma_device_type": 2 00:09:15.010 } 00:09:15.010 ], 00:09:15.010 "driver_specific": { 00:09:15.010 "raid": { 00:09:15.010 "uuid": "e3125207-82ab-48d3-92b8-c9c2a13c5515", 00:09:15.010 "strip_size_kb": 64, 00:09:15.010 "state": "online", 00:09:15.010 "raid_level": "concat", 00:09:15.010 "superblock": false, 00:09:15.010 "num_base_bdevs": 3, 00:09:15.010 "num_base_bdevs_discovered": 3, 00:09:15.010 "num_base_bdevs_operational": 3, 00:09:15.010 "base_bdevs_list": [ 00:09:15.010 { 00:09:15.010 "name": "BaseBdev1", 
00:09:15.010 "uuid": "ba747a89-526e-4378-a067-2a1d99ea935e", 00:09:15.010 "is_configured": true, 00:09:15.010 "data_offset": 0, 00:09:15.010 "data_size": 65536 00:09:15.010 }, 00:09:15.010 { 00:09:15.010 "name": "BaseBdev2", 00:09:15.010 "uuid": "8b020856-6cfc-46e9-8124-0f83da9f1dfe", 00:09:15.010 "is_configured": true, 00:09:15.010 "data_offset": 0, 00:09:15.010 "data_size": 65536 00:09:15.010 }, 00:09:15.010 { 00:09:15.010 "name": "BaseBdev3", 00:09:15.010 "uuid": "3d895508-17c1-4754-8e1e-3695d87705b7", 00:09:15.010 "is_configured": true, 00:09:15.010 "data_offset": 0, 00:09:15.010 "data_size": 65536 00:09:15.010 } 00:09:15.010 ] 00:09:15.010 } 00:09:15.010 } 00:09:15.010 }' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:15.010 BaseBdev2 00:09:15.010 BaseBdev3' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.010 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.010 [2024-12-14 12:35:14.714635] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.010 [2024-12-14 12:35:14.714664] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.010 [2024-12-14 12:35:14.714715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.270 "name": "Existed_Raid", 00:09:15.270 "uuid": "e3125207-82ab-48d3-92b8-c9c2a13c5515", 00:09:15.270 "strip_size_kb": 64, 00:09:15.270 "state": "offline", 00:09:15.270 "raid_level": "concat", 00:09:15.270 "superblock": false, 00:09:15.270 "num_base_bdevs": 3, 00:09:15.270 "num_base_bdevs_discovered": 2, 00:09:15.270 "num_base_bdevs_operational": 2, 00:09:15.270 "base_bdevs_list": [ 00:09:15.270 { 00:09:15.270 "name": null, 00:09:15.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.270 "is_configured": false, 00:09:15.270 "data_offset": 0, 00:09:15.270 "data_size": 65536 00:09:15.270 }, 00:09:15.270 { 00:09:15.270 "name": "BaseBdev2", 00:09:15.270 "uuid": 
"8b020856-6cfc-46e9-8124-0f83da9f1dfe", 00:09:15.270 "is_configured": true, 00:09:15.270 "data_offset": 0, 00:09:15.270 "data_size": 65536 00:09:15.270 }, 00:09:15.270 { 00:09:15.270 "name": "BaseBdev3", 00:09:15.270 "uuid": "3d895508-17c1-4754-8e1e-3695d87705b7", 00:09:15.270 "is_configured": true, 00:09:15.270 "data_offset": 0, 00:09:15.270 "data_size": 65536 00:09:15.270 } 00:09:15.270 ] 00:09:15.270 }' 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.270 12:35:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.529 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:15.529 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.529 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.529 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.529 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.529 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.529 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.788 [2024-12-14 12:35:15.295137] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.788 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.788 [2024-12-14 12:35:15.451024] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:15.788 [2024-12-14 12:35:15.451087] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:16.047 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.047 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:16.047 12:35:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:16.047 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 BaseBdev2 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.048 
12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 [ 00:09:16.048 { 00:09:16.048 "name": "BaseBdev2", 00:09:16.048 "aliases": [ 00:09:16.048 "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7" 00:09:16.048 ], 00:09:16.048 "product_name": "Malloc disk", 00:09:16.048 "block_size": 512, 00:09:16.048 "num_blocks": 65536, 00:09:16.048 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:16.048 "assigned_rate_limits": { 00:09:16.048 "rw_ios_per_sec": 0, 00:09:16.048 "rw_mbytes_per_sec": 0, 00:09:16.048 "r_mbytes_per_sec": 0, 00:09:16.048 "w_mbytes_per_sec": 0 00:09:16.048 }, 00:09:16.048 "claimed": false, 00:09:16.048 "zoned": false, 00:09:16.048 "supported_io_types": { 00:09:16.048 "read": true, 00:09:16.048 "write": true, 00:09:16.048 "unmap": true, 00:09:16.048 "flush": true, 00:09:16.048 "reset": true, 00:09:16.048 "nvme_admin": false, 00:09:16.048 "nvme_io": false, 00:09:16.048 "nvme_io_md": false, 00:09:16.048 "write_zeroes": true, 
00:09:16.048 "zcopy": true, 00:09:16.048 "get_zone_info": false, 00:09:16.048 "zone_management": false, 00:09:16.048 "zone_append": false, 00:09:16.048 "compare": false, 00:09:16.048 "compare_and_write": false, 00:09:16.048 "abort": true, 00:09:16.048 "seek_hole": false, 00:09:16.048 "seek_data": false, 00:09:16.048 "copy": true, 00:09:16.048 "nvme_iov_md": false 00:09:16.048 }, 00:09:16.048 "memory_domains": [ 00:09:16.048 { 00:09:16.048 "dma_device_id": "system", 00:09:16.048 "dma_device_type": 1 00:09:16.048 }, 00:09:16.048 { 00:09:16.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.048 "dma_device_type": 2 00:09:16.048 } 00:09:16.048 ], 00:09:16.048 "driver_specific": {} 00:09:16.048 } 00:09:16.048 ] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 BaseBdev3 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.048 12:35:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 [ 00:09:16.048 { 00:09:16.048 "name": "BaseBdev3", 00:09:16.048 "aliases": [ 00:09:16.048 "787b7953-06a8-4a0f-9c6e-42e9e03227a4" 00:09:16.048 ], 00:09:16.048 "product_name": "Malloc disk", 00:09:16.048 "block_size": 512, 00:09:16.048 "num_blocks": 65536, 00:09:16.048 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:16.048 "assigned_rate_limits": { 00:09:16.048 "rw_ios_per_sec": 0, 00:09:16.048 "rw_mbytes_per_sec": 0, 00:09:16.048 "r_mbytes_per_sec": 0, 00:09:16.048 "w_mbytes_per_sec": 0 00:09:16.048 }, 00:09:16.048 "claimed": false, 00:09:16.048 "zoned": false, 00:09:16.048 "supported_io_types": { 00:09:16.048 "read": true, 00:09:16.048 "write": true, 00:09:16.048 "unmap": true, 00:09:16.048 "flush": true, 00:09:16.048 "reset": true, 00:09:16.048 "nvme_admin": false, 00:09:16.048 "nvme_io": false, 00:09:16.048 "nvme_io_md": false, 00:09:16.048 "write_zeroes": true, 
00:09:16.048 "zcopy": true, 00:09:16.048 "get_zone_info": false, 00:09:16.048 "zone_management": false, 00:09:16.048 "zone_append": false, 00:09:16.048 "compare": false, 00:09:16.048 "compare_and_write": false, 00:09:16.048 "abort": true, 00:09:16.048 "seek_hole": false, 00:09:16.048 "seek_data": false, 00:09:16.048 "copy": true, 00:09:16.048 "nvme_iov_md": false 00:09:16.048 }, 00:09:16.048 "memory_domains": [ 00:09:16.048 { 00:09:16.048 "dma_device_id": "system", 00:09:16.048 "dma_device_type": 1 00:09:16.048 }, 00:09:16.048 { 00:09:16.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.048 "dma_device_type": 2 00:09:16.048 } 00:09:16.048 ], 00:09:16.048 "driver_specific": {} 00:09:16.048 } 00:09:16.048 ] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.048 [2024-12-14 12:35:15.762660] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:16.048 [2024-12-14 12:35:15.762740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:16.048 [2024-12-14 12:35:15.762781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.048 [2024-12-14 12:35:15.764526] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.048 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.049 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.049 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.049 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.308 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.308 12:35:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.308 "name": "Existed_Raid", 00:09:16.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.308 "strip_size_kb": 64, 00:09:16.308 "state": "configuring", 00:09:16.308 "raid_level": "concat", 00:09:16.308 "superblock": false, 00:09:16.308 "num_base_bdevs": 3, 00:09:16.308 "num_base_bdevs_discovered": 2, 00:09:16.308 "num_base_bdevs_operational": 3, 00:09:16.308 "base_bdevs_list": [ 00:09:16.308 { 00:09:16.308 "name": "BaseBdev1", 00:09:16.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.308 "is_configured": false, 00:09:16.308 "data_offset": 0, 00:09:16.308 "data_size": 0 00:09:16.308 }, 00:09:16.308 { 00:09:16.308 "name": "BaseBdev2", 00:09:16.308 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:16.308 "is_configured": true, 00:09:16.308 "data_offset": 0, 00:09:16.308 "data_size": 65536 00:09:16.308 }, 00:09:16.308 { 00:09:16.308 "name": "BaseBdev3", 00:09:16.308 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:16.308 "is_configured": true, 00:09:16.308 "data_offset": 0, 00:09:16.308 "data_size": 65536 00:09:16.308 } 00:09:16.308 ] 00:09:16.308 }' 00:09:16.308 12:35:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.308 12:35:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.568 [2024-12-14 12:35:16.229928] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.568 "name": "Existed_Raid", 00:09:16.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.568 "strip_size_kb": 64, 00:09:16.568 "state": "configuring", 00:09:16.568 "raid_level": "concat", 00:09:16.568 "superblock": false, 
00:09:16.568 "num_base_bdevs": 3, 00:09:16.568 "num_base_bdevs_discovered": 1, 00:09:16.568 "num_base_bdevs_operational": 3, 00:09:16.568 "base_bdevs_list": [ 00:09:16.568 { 00:09:16.568 "name": "BaseBdev1", 00:09:16.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.568 "is_configured": false, 00:09:16.568 "data_offset": 0, 00:09:16.568 "data_size": 0 00:09:16.568 }, 00:09:16.568 { 00:09:16.568 "name": null, 00:09:16.568 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:16.568 "is_configured": false, 00:09:16.568 "data_offset": 0, 00:09:16.568 "data_size": 65536 00:09:16.568 }, 00:09:16.568 { 00:09:16.568 "name": "BaseBdev3", 00:09:16.568 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:16.568 "is_configured": true, 00:09:16.568 "data_offset": 0, 00:09:16.568 "data_size": 65536 00:09:16.568 } 00:09:16.568 ] 00:09:16.568 }' 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.568 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.137 
12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.137 [2024-12-14 12:35:16.682249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.137 BaseBdev1 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:17.137 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.138 [ 00:09:17.138 { 00:09:17.138 "name": "BaseBdev1", 00:09:17.138 "aliases": [ 00:09:17.138 "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f" 00:09:17.138 ], 00:09:17.138 "product_name": 
"Malloc disk", 00:09:17.138 "block_size": 512, 00:09:17.138 "num_blocks": 65536, 00:09:17.138 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:17.138 "assigned_rate_limits": { 00:09:17.138 "rw_ios_per_sec": 0, 00:09:17.138 "rw_mbytes_per_sec": 0, 00:09:17.138 "r_mbytes_per_sec": 0, 00:09:17.138 "w_mbytes_per_sec": 0 00:09:17.138 }, 00:09:17.138 "claimed": true, 00:09:17.138 "claim_type": "exclusive_write", 00:09:17.138 "zoned": false, 00:09:17.138 "supported_io_types": { 00:09:17.138 "read": true, 00:09:17.138 "write": true, 00:09:17.138 "unmap": true, 00:09:17.138 "flush": true, 00:09:17.138 "reset": true, 00:09:17.138 "nvme_admin": false, 00:09:17.138 "nvme_io": false, 00:09:17.138 "nvme_io_md": false, 00:09:17.138 "write_zeroes": true, 00:09:17.138 "zcopy": true, 00:09:17.138 "get_zone_info": false, 00:09:17.138 "zone_management": false, 00:09:17.138 "zone_append": false, 00:09:17.138 "compare": false, 00:09:17.138 "compare_and_write": false, 00:09:17.138 "abort": true, 00:09:17.138 "seek_hole": false, 00:09:17.138 "seek_data": false, 00:09:17.138 "copy": true, 00:09:17.138 "nvme_iov_md": false 00:09:17.138 }, 00:09:17.138 "memory_domains": [ 00:09:17.138 { 00:09:17.138 "dma_device_id": "system", 00:09:17.138 "dma_device_type": 1 00:09:17.138 }, 00:09:17.138 { 00:09:17.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.138 "dma_device_type": 2 00:09:17.138 } 00:09:17.138 ], 00:09:17.138 "driver_specific": {} 00:09:17.138 } 00:09:17.138 ] 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.138 12:35:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.138 "name": "Existed_Raid", 00:09:17.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.138 "strip_size_kb": 64, 00:09:17.138 "state": "configuring", 00:09:17.138 "raid_level": "concat", 00:09:17.138 "superblock": false, 00:09:17.138 "num_base_bdevs": 3, 00:09:17.138 "num_base_bdevs_discovered": 2, 00:09:17.138 "num_base_bdevs_operational": 3, 00:09:17.138 "base_bdevs_list": [ 00:09:17.138 { 00:09:17.138 "name": "BaseBdev1", 
00:09:17.138 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:17.138 "is_configured": true, 00:09:17.138 "data_offset": 0, 00:09:17.138 "data_size": 65536 00:09:17.138 }, 00:09:17.138 { 00:09:17.138 "name": null, 00:09:17.138 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:17.138 "is_configured": false, 00:09:17.138 "data_offset": 0, 00:09:17.138 "data_size": 65536 00:09:17.138 }, 00:09:17.138 { 00:09:17.138 "name": "BaseBdev3", 00:09:17.138 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:17.138 "is_configured": true, 00:09:17.138 "data_offset": 0, 00:09:17.138 "data_size": 65536 00:09:17.138 } 00:09:17.138 ] 00:09:17.138 }' 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.138 12:35:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.707 [2024-12-14 12:35:17.233425] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:17.707 
12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.707 "name": "Existed_Raid", 00:09:17.707 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:17.707 "strip_size_kb": 64, 00:09:17.707 "state": "configuring", 00:09:17.707 "raid_level": "concat", 00:09:17.707 "superblock": false, 00:09:17.707 "num_base_bdevs": 3, 00:09:17.707 "num_base_bdevs_discovered": 1, 00:09:17.707 "num_base_bdevs_operational": 3, 00:09:17.707 "base_bdevs_list": [ 00:09:17.707 { 00:09:17.707 "name": "BaseBdev1", 00:09:17.707 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:17.707 "is_configured": true, 00:09:17.707 "data_offset": 0, 00:09:17.707 "data_size": 65536 00:09:17.707 }, 00:09:17.707 { 00:09:17.707 "name": null, 00:09:17.707 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:17.707 "is_configured": false, 00:09:17.707 "data_offset": 0, 00:09:17.707 "data_size": 65536 00:09:17.707 }, 00:09:17.707 { 00:09:17.707 "name": null, 00:09:17.707 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:17.707 "is_configured": false, 00:09:17.707 "data_offset": 0, 00:09:17.707 "data_size": 65536 00:09:17.707 } 00:09:17.707 ] 00:09:17.707 }' 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.707 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.967 [2024-12-14 12:35:17.644711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.967 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.227 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.227 "name": "Existed_Raid", 00:09:18.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.227 "strip_size_kb": 64, 00:09:18.227 "state": "configuring", 00:09:18.227 "raid_level": "concat", 00:09:18.227 "superblock": false, 00:09:18.227 "num_base_bdevs": 3, 00:09:18.227 "num_base_bdevs_discovered": 2, 00:09:18.227 "num_base_bdevs_operational": 3, 00:09:18.227 "base_bdevs_list": [ 00:09:18.227 { 00:09:18.227 "name": "BaseBdev1", 00:09:18.227 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:18.227 "is_configured": true, 00:09:18.227 "data_offset": 0, 00:09:18.227 "data_size": 65536 00:09:18.227 }, 00:09:18.227 { 00:09:18.227 "name": null, 00:09:18.227 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:18.227 "is_configured": false, 00:09:18.227 "data_offset": 0, 00:09:18.227 "data_size": 65536 00:09:18.227 }, 00:09:18.227 { 00:09:18.227 "name": "BaseBdev3", 00:09:18.227 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:18.227 "is_configured": true, 00:09:18.227 "data_offset": 0, 00:09:18.227 "data_size": 65536 00:09:18.227 } 00:09:18.227 ] 00:09:18.227 }' 00:09:18.227 12:35:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.227 12:35:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.486 [2024-12-14 12:35:18.123940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.486 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.486 12:35:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.745 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.745 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.745 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.745 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.745 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.745 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.745 "name": "Existed_Raid", 00:09:18.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.745 "strip_size_kb": 64, 00:09:18.745 "state": "configuring", 00:09:18.745 "raid_level": "concat", 00:09:18.745 "superblock": false, 00:09:18.745 "num_base_bdevs": 3, 00:09:18.745 "num_base_bdevs_discovered": 1, 00:09:18.745 "num_base_bdevs_operational": 3, 00:09:18.745 "base_bdevs_list": [ 00:09:18.745 { 00:09:18.745 "name": null, 00:09:18.745 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:18.745 "is_configured": false, 00:09:18.745 "data_offset": 0, 00:09:18.745 "data_size": 65536 00:09:18.745 }, 00:09:18.745 { 00:09:18.745 "name": null, 00:09:18.745 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:18.745 "is_configured": false, 00:09:18.745 "data_offset": 0, 00:09:18.745 "data_size": 65536 00:09:18.745 }, 00:09:18.745 { 00:09:18.745 "name": "BaseBdev3", 00:09:18.745 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:18.745 "is_configured": true, 00:09:18.745 "data_offset": 0, 00:09:18.745 "data_size": 65536 00:09:18.745 } 00:09:18.745 ] 00:09:18.745 }' 00:09:18.745 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.745 12:35:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.005 [2024-12-14 12:35:18.623969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.005 12:35:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.005 "name": "Existed_Raid", 00:09:19.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.005 "strip_size_kb": 64, 00:09:19.005 "state": "configuring", 00:09:19.005 "raid_level": "concat", 00:09:19.005 "superblock": false, 00:09:19.005 "num_base_bdevs": 3, 00:09:19.005 "num_base_bdevs_discovered": 2, 00:09:19.005 "num_base_bdevs_operational": 3, 00:09:19.005 "base_bdevs_list": [ 00:09:19.005 { 00:09:19.005 "name": null, 00:09:19.005 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:19.005 "is_configured": false, 00:09:19.005 "data_offset": 0, 00:09:19.005 "data_size": 65536 00:09:19.005 }, 00:09:19.005 { 00:09:19.005 "name": "BaseBdev2", 00:09:19.005 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:19.005 "is_configured": true, 00:09:19.005 "data_offset": 
0, 00:09:19.005 "data_size": 65536 00:09:19.005 }, 00:09:19.005 { 00:09:19.005 "name": "BaseBdev3", 00:09:19.005 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:19.005 "is_configured": true, 00:09:19.005 "data_offset": 0, 00:09:19.005 "data_size": 65536 00:09:19.005 } 00:09:19.005 ] 00:09:19.005 }' 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.005 12:35:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 678cde12-234a-4b4d-b5d6-fc4b8dd9db7f 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.577 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.577 [2024-12-14 12:35:19.175858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:19.577 [2024-12-14 12:35:19.175900] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:19.577 [2024-12-14 12:35:19.175909] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:19.577 [2024-12-14 12:35:19.176175] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:19.577 [2024-12-14 12:35:19.176336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:19.577 [2024-12-14 12:35:19.176346] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:19.577 [2024-12-14 12:35:19.176604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.577 NewBaseBdev 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:19.578 
12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.578 [ 00:09:19.578 { 00:09:19.578 "name": "NewBaseBdev", 00:09:19.578 "aliases": [ 00:09:19.578 "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f" 00:09:19.578 ], 00:09:19.578 "product_name": "Malloc disk", 00:09:19.578 "block_size": 512, 00:09:19.578 "num_blocks": 65536, 00:09:19.578 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:19.578 "assigned_rate_limits": { 00:09:19.578 "rw_ios_per_sec": 0, 00:09:19.578 "rw_mbytes_per_sec": 0, 00:09:19.578 "r_mbytes_per_sec": 0, 00:09:19.578 "w_mbytes_per_sec": 0 00:09:19.578 }, 00:09:19.578 "claimed": true, 00:09:19.578 "claim_type": "exclusive_write", 00:09:19.578 "zoned": false, 00:09:19.578 "supported_io_types": { 00:09:19.578 "read": true, 00:09:19.578 "write": true, 00:09:19.578 "unmap": true, 00:09:19.578 "flush": true, 00:09:19.578 "reset": true, 00:09:19.578 "nvme_admin": false, 00:09:19.578 "nvme_io": false, 00:09:19.578 "nvme_io_md": false, 00:09:19.578 "write_zeroes": true, 00:09:19.578 "zcopy": true, 00:09:19.578 "get_zone_info": false, 00:09:19.578 "zone_management": false, 00:09:19.578 "zone_append": false, 00:09:19.578 "compare": false, 00:09:19.578 "compare_and_write": false, 00:09:19.578 "abort": true, 00:09:19.578 "seek_hole": false, 00:09:19.578 "seek_data": false, 00:09:19.578 "copy": true, 00:09:19.578 "nvme_iov_md": false 00:09:19.578 }, 00:09:19.578 
"memory_domains": [ 00:09:19.578 { 00:09:19.578 "dma_device_id": "system", 00:09:19.578 "dma_device_type": 1 00:09:19.578 }, 00:09:19.578 { 00:09:19.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.578 "dma_device_type": 2 00:09:19.578 } 00:09:19.578 ], 00:09:19.578 "driver_specific": {} 00:09:19.578 } 00:09:19.578 ] 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.578 "name": "Existed_Raid", 00:09:19.578 "uuid": "9a952bf1-a7a7-4eca-ba4a-0b1ed2286e23", 00:09:19.578 "strip_size_kb": 64, 00:09:19.578 "state": "online", 00:09:19.578 "raid_level": "concat", 00:09:19.578 "superblock": false, 00:09:19.578 "num_base_bdevs": 3, 00:09:19.578 "num_base_bdevs_discovered": 3, 00:09:19.578 "num_base_bdevs_operational": 3, 00:09:19.578 "base_bdevs_list": [ 00:09:19.578 { 00:09:19.578 "name": "NewBaseBdev", 00:09:19.578 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:19.578 "is_configured": true, 00:09:19.578 "data_offset": 0, 00:09:19.578 "data_size": 65536 00:09:19.578 }, 00:09:19.578 { 00:09:19.578 "name": "BaseBdev2", 00:09:19.578 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:19.578 "is_configured": true, 00:09:19.578 "data_offset": 0, 00:09:19.578 "data_size": 65536 00:09:19.578 }, 00:09:19.578 { 00:09:19.578 "name": "BaseBdev3", 00:09:19.578 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:19.578 "is_configured": true, 00:09:19.578 "data_offset": 0, 00:09:19.578 "data_size": 65536 00:09:19.578 } 00:09:19.578 ] 00:09:19.578 }' 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.578 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.155 [2024-12-14 12:35:19.667399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.155 "name": "Existed_Raid", 00:09:20.155 "aliases": [ 00:09:20.155 "9a952bf1-a7a7-4eca-ba4a-0b1ed2286e23" 00:09:20.155 ], 00:09:20.155 "product_name": "Raid Volume", 00:09:20.155 "block_size": 512, 00:09:20.155 "num_blocks": 196608, 00:09:20.155 "uuid": "9a952bf1-a7a7-4eca-ba4a-0b1ed2286e23", 00:09:20.155 "assigned_rate_limits": { 00:09:20.155 "rw_ios_per_sec": 0, 00:09:20.155 "rw_mbytes_per_sec": 0, 00:09:20.155 "r_mbytes_per_sec": 0, 00:09:20.155 "w_mbytes_per_sec": 0 00:09:20.155 }, 00:09:20.155 "claimed": false, 00:09:20.155 "zoned": false, 00:09:20.155 "supported_io_types": { 00:09:20.155 "read": true, 00:09:20.155 "write": true, 00:09:20.155 "unmap": true, 00:09:20.155 "flush": true, 00:09:20.155 "reset": true, 00:09:20.155 "nvme_admin": false, 00:09:20.155 "nvme_io": false, 00:09:20.155 "nvme_io_md": false, 00:09:20.155 "write_zeroes": true, 
00:09:20.155 "zcopy": false, 00:09:20.155 "get_zone_info": false, 00:09:20.155 "zone_management": false, 00:09:20.155 "zone_append": false, 00:09:20.155 "compare": false, 00:09:20.155 "compare_and_write": false, 00:09:20.155 "abort": false, 00:09:20.155 "seek_hole": false, 00:09:20.155 "seek_data": false, 00:09:20.155 "copy": false, 00:09:20.155 "nvme_iov_md": false 00:09:20.155 }, 00:09:20.155 "memory_domains": [ 00:09:20.155 { 00:09:20.155 "dma_device_id": "system", 00:09:20.155 "dma_device_type": 1 00:09:20.155 }, 00:09:20.155 { 00:09:20.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.155 "dma_device_type": 2 00:09:20.155 }, 00:09:20.155 { 00:09:20.155 "dma_device_id": "system", 00:09:20.155 "dma_device_type": 1 00:09:20.155 }, 00:09:20.155 { 00:09:20.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.155 "dma_device_type": 2 00:09:20.155 }, 00:09:20.155 { 00:09:20.155 "dma_device_id": "system", 00:09:20.155 "dma_device_type": 1 00:09:20.155 }, 00:09:20.155 { 00:09:20.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.155 "dma_device_type": 2 00:09:20.155 } 00:09:20.155 ], 00:09:20.155 "driver_specific": { 00:09:20.155 "raid": { 00:09:20.155 "uuid": "9a952bf1-a7a7-4eca-ba4a-0b1ed2286e23", 00:09:20.155 "strip_size_kb": 64, 00:09:20.155 "state": "online", 00:09:20.155 "raid_level": "concat", 00:09:20.155 "superblock": false, 00:09:20.155 "num_base_bdevs": 3, 00:09:20.155 "num_base_bdevs_discovered": 3, 00:09:20.155 "num_base_bdevs_operational": 3, 00:09:20.155 "base_bdevs_list": [ 00:09:20.155 { 00:09:20.155 "name": "NewBaseBdev", 00:09:20.155 "uuid": "678cde12-234a-4b4d-b5d6-fc4b8dd9db7f", 00:09:20.155 "is_configured": true, 00:09:20.155 "data_offset": 0, 00:09:20.155 "data_size": 65536 00:09:20.155 }, 00:09:20.155 { 00:09:20.155 "name": "BaseBdev2", 00:09:20.155 "uuid": "e88f5ca8-0ddb-4679-b7ff-a35bc80d74f7", 00:09:20.155 "is_configured": true, 00:09:20.155 "data_offset": 0, 00:09:20.155 "data_size": 65536 00:09:20.155 }, 00:09:20.155 { 
00:09:20.155 "name": "BaseBdev3", 00:09:20.155 "uuid": "787b7953-06a8-4a0f-9c6e-42e9e03227a4", 00:09:20.155 "is_configured": true, 00:09:20.155 "data_offset": 0, 00:09:20.155 "data_size": 65536 00:09:20.155 } 00:09:20.155 ] 00:09:20.155 } 00:09:20.155 } 00:09:20.155 }' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:20.155 BaseBdev2 00:09:20.155 BaseBdev3' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.155 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:20.416 [2024-12-14 12:35:19.950608] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.416 [2024-12-14 12:35:19.950687] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.416 [2024-12-14 12:35:19.950786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.416 [2024-12-14 12:35:19.950845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:20.416 [2024-12-14 12:35:19.950857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67403 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67403 ']' 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67403 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67403 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.416 killing process with pid 67403 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67403' 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 67403 00:09:20.416 [2024-12-14 12:35:19.987885] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.416 12:35:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67403 00:09:20.676 [2024-12-14 12:35:20.283141] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:22.057 00:09:22.057 real 0m10.288s 00:09:22.057 user 0m16.361s 00:09:22.057 sys 0m1.753s 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.057 ************************************ 00:09:22.057 END TEST raid_state_function_test 00:09:22.057 ************************************ 00:09:22.057 12:35:21 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:22.057 12:35:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:22.057 12:35:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:22.057 12:35:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:22.057 ************************************ 00:09:22.057 START TEST raid_state_function_test_sb 00:09:22.057 ************************************ 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:22.057 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68024 00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68024' 00:09:22.058 Process raid pid: 68024 00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68024 00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68024 ']' 00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:22.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:22.058 12:35:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.058 [2024-12-14 12:35:21.555812] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:22.058 [2024-12-14 12:35:21.555933] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:22.058 [2024-12-14 12:35:21.723956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.318 [2024-12-14 12:35:21.835389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.318 [2024-12-14 12:35:22.037655] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.318 [2024-12-14 12:35:22.037690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.888 [2024-12-14 12:35:22.387159] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.888 [2024-12-14 12:35:22.387210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.888 [2024-12-14 
12:35:22.387220] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.888 [2024-12-14 12:35:22.387246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.888 [2024-12-14 12:35:22.387253] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.888 [2024-12-14 12:35:22.387262] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.888 "name": "Existed_Raid", 00:09:22.888 "uuid": "114a654b-a1d6-4e05-af74-9df0954fc4b1", 00:09:22.888 "strip_size_kb": 64, 00:09:22.888 "state": "configuring", 00:09:22.888 "raid_level": "concat", 00:09:22.888 "superblock": true, 00:09:22.888 "num_base_bdevs": 3, 00:09:22.888 "num_base_bdevs_discovered": 0, 00:09:22.888 "num_base_bdevs_operational": 3, 00:09:22.888 "base_bdevs_list": [ 00:09:22.888 { 00:09:22.888 "name": "BaseBdev1", 00:09:22.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.888 "is_configured": false, 00:09:22.888 "data_offset": 0, 00:09:22.888 "data_size": 0 00:09:22.888 }, 00:09:22.888 { 00:09:22.888 "name": "BaseBdev2", 00:09:22.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.888 "is_configured": false, 00:09:22.888 "data_offset": 0, 00:09:22.888 "data_size": 0 00:09:22.888 }, 00:09:22.888 { 00:09:22.888 "name": "BaseBdev3", 00:09:22.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.888 "is_configured": false, 00:09:22.888 "data_offset": 0, 00:09:22.888 "data_size": 0 00:09:22.888 } 00:09:22.888 ] 00:09:22.888 }' 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.888 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.148 [2024-12-14 12:35:22.850320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.148 [2024-12-14 12:35:22.850416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.148 [2024-12-14 12:35:22.858330] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.148 [2024-12-14 12:35:22.858418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.148 [2024-12-14 12:35:22.858471] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.148 [2024-12-14 12:35:22.858498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.148 [2024-12-14 12:35:22.858530] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.148 [2024-12-14 12:35:22.858556] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:23.148 
12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.148 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.408 [2024-12-14 12:35:22.901984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.408 BaseBdev1 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.408 [ 00:09:23.408 { 
00:09:23.408 "name": "BaseBdev1", 00:09:23.408 "aliases": [ 00:09:23.408 "994eaabe-619b-4365-ad7a-0089a9017900" 00:09:23.408 ], 00:09:23.408 "product_name": "Malloc disk", 00:09:23.408 "block_size": 512, 00:09:23.408 "num_blocks": 65536, 00:09:23.408 "uuid": "994eaabe-619b-4365-ad7a-0089a9017900", 00:09:23.408 "assigned_rate_limits": { 00:09:23.408 "rw_ios_per_sec": 0, 00:09:23.408 "rw_mbytes_per_sec": 0, 00:09:23.408 "r_mbytes_per_sec": 0, 00:09:23.408 "w_mbytes_per_sec": 0 00:09:23.408 }, 00:09:23.408 "claimed": true, 00:09:23.408 "claim_type": "exclusive_write", 00:09:23.408 "zoned": false, 00:09:23.408 "supported_io_types": { 00:09:23.408 "read": true, 00:09:23.408 "write": true, 00:09:23.408 "unmap": true, 00:09:23.408 "flush": true, 00:09:23.408 "reset": true, 00:09:23.408 "nvme_admin": false, 00:09:23.408 "nvme_io": false, 00:09:23.408 "nvme_io_md": false, 00:09:23.408 "write_zeroes": true, 00:09:23.408 "zcopy": true, 00:09:23.408 "get_zone_info": false, 00:09:23.408 "zone_management": false, 00:09:23.408 "zone_append": false, 00:09:23.408 "compare": false, 00:09:23.408 "compare_and_write": false, 00:09:23.408 "abort": true, 00:09:23.408 "seek_hole": false, 00:09:23.408 "seek_data": false, 00:09:23.408 "copy": true, 00:09:23.408 "nvme_iov_md": false 00:09:23.408 }, 00:09:23.408 "memory_domains": [ 00:09:23.408 { 00:09:23.408 "dma_device_id": "system", 00:09:23.408 "dma_device_type": 1 00:09:23.408 }, 00:09:23.408 { 00:09:23.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.408 "dma_device_type": 2 00:09:23.408 } 00:09:23.408 ], 00:09:23.408 "driver_specific": {} 00:09:23.408 } 00:09:23.408 ] 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.408 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.408 "name": "Existed_Raid", 00:09:23.408 "uuid": "356c03d0-368c-4d07-ac26-1c7900791a2c", 00:09:23.408 "strip_size_kb": 64, 00:09:23.408 "state": "configuring", 00:09:23.408 "raid_level": "concat", 00:09:23.408 "superblock": true, 00:09:23.408 
"num_base_bdevs": 3, 00:09:23.408 "num_base_bdevs_discovered": 1, 00:09:23.408 "num_base_bdevs_operational": 3, 00:09:23.408 "base_bdevs_list": [ 00:09:23.408 { 00:09:23.408 "name": "BaseBdev1", 00:09:23.408 "uuid": "994eaabe-619b-4365-ad7a-0089a9017900", 00:09:23.408 "is_configured": true, 00:09:23.408 "data_offset": 2048, 00:09:23.408 "data_size": 63488 00:09:23.408 }, 00:09:23.408 { 00:09:23.408 "name": "BaseBdev2", 00:09:23.408 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.408 "is_configured": false, 00:09:23.408 "data_offset": 0, 00:09:23.408 "data_size": 0 00:09:23.408 }, 00:09:23.409 { 00:09:23.409 "name": "BaseBdev3", 00:09:23.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.409 "is_configured": false, 00:09:23.409 "data_offset": 0, 00:09:23.409 "data_size": 0 00:09:23.409 } 00:09:23.409 ] 00:09:23.409 }' 00:09:23.409 12:35:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.409 12:35:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.668 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.669 [2024-12-14 12:35:23.365257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:23.669 [2024-12-14 12:35:23.365394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.669 
12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.669 [2024-12-14 12:35:23.373289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:23.669 [2024-12-14 12:35:23.375127] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:23.669 [2024-12-14 12:35:23.375209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:23.669 [2024-12-14 12:35:23.375224] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:23.669 [2024-12-14 12:35:23.375233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.669 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.928 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.928 "name": "Existed_Raid", 00:09:23.928 "uuid": "7814b7f7-d5d6-4885-87e4-0bbaf621abeb", 00:09:23.928 "strip_size_kb": 64, 00:09:23.928 "state": "configuring", 00:09:23.928 "raid_level": "concat", 00:09:23.928 "superblock": true, 00:09:23.928 "num_base_bdevs": 3, 00:09:23.928 "num_base_bdevs_discovered": 1, 00:09:23.928 "num_base_bdevs_operational": 3, 00:09:23.928 "base_bdevs_list": [ 00:09:23.929 { 00:09:23.929 "name": "BaseBdev1", 00:09:23.929 "uuid": "994eaabe-619b-4365-ad7a-0089a9017900", 00:09:23.929 "is_configured": true, 00:09:23.929 "data_offset": 2048, 00:09:23.929 "data_size": 63488 00:09:23.929 }, 00:09:23.929 { 00:09:23.929 "name": "BaseBdev2", 00:09:23.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.929 "is_configured": false, 00:09:23.929 "data_offset": 0, 00:09:23.929 "data_size": 0 00:09:23.929 }, 00:09:23.929 { 00:09:23.929 "name": "BaseBdev3", 00:09:23.929 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:23.929 "is_configured": false, 00:09:23.929 "data_offset": 0, 00:09:23.929 "data_size": 0 00:09:23.929 } 00:09:23.929 ] 00:09:23.929 }' 00:09:23.929 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.929 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.188 [2024-12-14 12:35:23.841587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.188 BaseBdev2 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.188 [ 00:09:24.188 { 00:09:24.188 "name": "BaseBdev2", 00:09:24.188 "aliases": [ 00:09:24.188 "907b1457-6c7e-4503-b7da-516d865796ad" 00:09:24.188 ], 00:09:24.188 "product_name": "Malloc disk", 00:09:24.188 "block_size": 512, 00:09:24.188 "num_blocks": 65536, 00:09:24.188 "uuid": "907b1457-6c7e-4503-b7da-516d865796ad", 00:09:24.188 "assigned_rate_limits": { 00:09:24.188 "rw_ios_per_sec": 0, 00:09:24.188 "rw_mbytes_per_sec": 0, 00:09:24.188 "r_mbytes_per_sec": 0, 00:09:24.188 "w_mbytes_per_sec": 0 00:09:24.188 }, 00:09:24.188 "claimed": true, 00:09:24.188 "claim_type": "exclusive_write", 00:09:24.188 "zoned": false, 00:09:24.188 "supported_io_types": { 00:09:24.188 "read": true, 00:09:24.188 "write": true, 00:09:24.188 "unmap": true, 00:09:24.188 "flush": true, 00:09:24.188 "reset": true, 00:09:24.188 "nvme_admin": false, 00:09:24.188 "nvme_io": false, 00:09:24.188 "nvme_io_md": false, 00:09:24.188 "write_zeroes": true, 00:09:24.188 "zcopy": true, 00:09:24.188 "get_zone_info": false, 00:09:24.188 "zone_management": false, 00:09:24.188 "zone_append": false, 00:09:24.188 "compare": false, 00:09:24.188 "compare_and_write": false, 00:09:24.188 "abort": true, 00:09:24.188 "seek_hole": false, 00:09:24.188 "seek_data": false, 00:09:24.188 "copy": true, 00:09:24.188 "nvme_iov_md": false 00:09:24.188 }, 00:09:24.188 "memory_domains": [ 00:09:24.188 { 00:09:24.188 "dma_device_id": "system", 00:09:24.188 "dma_device_type": 1 00:09:24.188 }, 00:09:24.188 { 00:09:24.188 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.188 "dma_device_type": 2 00:09:24.188 } 00:09:24.188 ], 00:09:24.188 "driver_specific": {} 00:09:24.188 } 00:09:24.188 ] 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.188 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.448 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.448 "name": "Existed_Raid", 00:09:24.448 "uuid": "7814b7f7-d5d6-4885-87e4-0bbaf621abeb", 00:09:24.448 "strip_size_kb": 64, 00:09:24.448 "state": "configuring", 00:09:24.448 "raid_level": "concat", 00:09:24.448 "superblock": true, 00:09:24.448 "num_base_bdevs": 3, 00:09:24.448 "num_base_bdevs_discovered": 2, 00:09:24.448 "num_base_bdevs_operational": 3, 00:09:24.448 "base_bdevs_list": [ 00:09:24.448 { 00:09:24.448 "name": "BaseBdev1", 00:09:24.448 "uuid": "994eaabe-619b-4365-ad7a-0089a9017900", 00:09:24.448 "is_configured": true, 00:09:24.448 "data_offset": 2048, 00:09:24.448 "data_size": 63488 00:09:24.448 }, 00:09:24.448 { 00:09:24.448 "name": "BaseBdev2", 00:09:24.448 "uuid": "907b1457-6c7e-4503-b7da-516d865796ad", 00:09:24.448 "is_configured": true, 00:09:24.448 "data_offset": 2048, 00:09:24.448 "data_size": 63488 00:09:24.448 }, 00:09:24.448 { 00:09:24.448 "name": "BaseBdev3", 00:09:24.448 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.448 "is_configured": false, 00:09:24.448 "data_offset": 0, 00:09:24.448 "data_size": 0 00:09:24.448 } 00:09:24.448 ] 00:09:24.448 }' 00:09:24.448 12:35:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.448 12:35:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:24.708 12:35:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.708 [2024-12-14 12:35:24.330406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.708 [2024-12-14 12:35:24.330788] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:24.708 [2024-12-14 12:35:24.330851] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:24.708 [2024-12-14 12:35:24.331343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:24.708 [2024-12-14 12:35:24.331562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:24.708 [2024-12-14 12:35:24.331605] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:24.708 BaseBdev3 00:09:24.708 [2024-12-14 12:35:24.331819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.708 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.708 [ 00:09:24.708 { 00:09:24.708 "name": "BaseBdev3", 00:09:24.708 "aliases": [ 00:09:24.708 "b1fa4be9-f5f8-47f0-b686-f3fee6b962e7" 00:09:24.708 ], 00:09:24.708 "product_name": "Malloc disk", 00:09:24.708 "block_size": 512, 00:09:24.708 "num_blocks": 65536, 00:09:24.708 "uuid": "b1fa4be9-f5f8-47f0-b686-f3fee6b962e7", 00:09:24.708 "assigned_rate_limits": { 00:09:24.708 "rw_ios_per_sec": 0, 00:09:24.708 "rw_mbytes_per_sec": 0, 00:09:24.708 "r_mbytes_per_sec": 0, 00:09:24.708 "w_mbytes_per_sec": 0 00:09:24.708 }, 00:09:24.708 "claimed": true, 00:09:24.708 "claim_type": "exclusive_write", 00:09:24.708 "zoned": false, 00:09:24.708 "supported_io_types": { 00:09:24.708 "read": true, 00:09:24.708 "write": true, 00:09:24.708 "unmap": true, 00:09:24.708 "flush": true, 00:09:24.708 "reset": true, 00:09:24.708 "nvme_admin": false, 00:09:24.708 "nvme_io": false, 00:09:24.708 "nvme_io_md": false, 00:09:24.708 "write_zeroes": true, 00:09:24.708 "zcopy": true, 00:09:24.708 "get_zone_info": false, 00:09:24.708 "zone_management": false, 00:09:24.709 "zone_append": false, 00:09:24.709 "compare": false, 00:09:24.709 "compare_and_write": false, 00:09:24.709 "abort": true, 00:09:24.709 "seek_hole": false, 00:09:24.709 "seek_data": false, 
00:09:24.709 "copy": true, 00:09:24.709 "nvme_iov_md": false 00:09:24.709 }, 00:09:24.709 "memory_domains": [ 00:09:24.709 { 00:09:24.709 "dma_device_id": "system", 00:09:24.709 "dma_device_type": 1 00:09:24.709 }, 00:09:24.709 { 00:09:24.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.709 "dma_device_type": 2 00:09:24.709 } 00:09:24.709 ], 00:09:24.709 "driver_specific": {} 00:09:24.709 } 00:09:24.709 ] 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.709 "name": "Existed_Raid", 00:09:24.709 "uuid": "7814b7f7-d5d6-4885-87e4-0bbaf621abeb", 00:09:24.709 "strip_size_kb": 64, 00:09:24.709 "state": "online", 00:09:24.709 "raid_level": "concat", 00:09:24.709 "superblock": true, 00:09:24.709 "num_base_bdevs": 3, 00:09:24.709 "num_base_bdevs_discovered": 3, 00:09:24.709 "num_base_bdevs_operational": 3, 00:09:24.709 "base_bdevs_list": [ 00:09:24.709 { 00:09:24.709 "name": "BaseBdev1", 00:09:24.709 "uuid": "994eaabe-619b-4365-ad7a-0089a9017900", 00:09:24.709 "is_configured": true, 00:09:24.709 "data_offset": 2048, 00:09:24.709 "data_size": 63488 00:09:24.709 }, 00:09:24.709 { 00:09:24.709 "name": "BaseBdev2", 00:09:24.709 "uuid": "907b1457-6c7e-4503-b7da-516d865796ad", 00:09:24.709 "is_configured": true, 00:09:24.709 "data_offset": 2048, 00:09:24.709 "data_size": 63488 00:09:24.709 }, 00:09:24.709 { 00:09:24.709 "name": "BaseBdev3", 00:09:24.709 "uuid": "b1fa4be9-f5f8-47f0-b686-f3fee6b962e7", 00:09:24.709 "is_configured": true, 00:09:24.709 "data_offset": 2048, 00:09:24.709 "data_size": 63488 00:09:24.709 } 00:09:24.709 ] 00:09:24.709 }' 00:09:24.709 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.709 12:35:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.279 [2024-12-14 12:35:24.794104] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.279 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.279 "name": "Existed_Raid", 00:09:25.279 "aliases": [ 00:09:25.279 "7814b7f7-d5d6-4885-87e4-0bbaf621abeb" 00:09:25.279 ], 00:09:25.279 "product_name": "Raid Volume", 00:09:25.279 "block_size": 512, 00:09:25.279 "num_blocks": 190464, 00:09:25.279 "uuid": "7814b7f7-d5d6-4885-87e4-0bbaf621abeb", 00:09:25.279 "assigned_rate_limits": { 00:09:25.279 "rw_ios_per_sec": 0, 00:09:25.279 "rw_mbytes_per_sec": 0, 00:09:25.279 
"r_mbytes_per_sec": 0, 00:09:25.279 "w_mbytes_per_sec": 0 00:09:25.279 }, 00:09:25.279 "claimed": false, 00:09:25.279 "zoned": false, 00:09:25.279 "supported_io_types": { 00:09:25.279 "read": true, 00:09:25.279 "write": true, 00:09:25.279 "unmap": true, 00:09:25.279 "flush": true, 00:09:25.279 "reset": true, 00:09:25.279 "nvme_admin": false, 00:09:25.279 "nvme_io": false, 00:09:25.279 "nvme_io_md": false, 00:09:25.279 "write_zeroes": true, 00:09:25.279 "zcopy": false, 00:09:25.279 "get_zone_info": false, 00:09:25.279 "zone_management": false, 00:09:25.279 "zone_append": false, 00:09:25.279 "compare": false, 00:09:25.279 "compare_and_write": false, 00:09:25.279 "abort": false, 00:09:25.279 "seek_hole": false, 00:09:25.279 "seek_data": false, 00:09:25.279 "copy": false, 00:09:25.279 "nvme_iov_md": false 00:09:25.279 }, 00:09:25.279 "memory_domains": [ 00:09:25.279 { 00:09:25.279 "dma_device_id": "system", 00:09:25.279 "dma_device_type": 1 00:09:25.279 }, 00:09:25.279 { 00:09:25.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.279 "dma_device_type": 2 00:09:25.279 }, 00:09:25.279 { 00:09:25.279 "dma_device_id": "system", 00:09:25.279 "dma_device_type": 1 00:09:25.279 }, 00:09:25.279 { 00:09:25.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.279 "dma_device_type": 2 00:09:25.279 }, 00:09:25.279 { 00:09:25.279 "dma_device_id": "system", 00:09:25.279 "dma_device_type": 1 00:09:25.279 }, 00:09:25.279 { 00:09:25.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.279 "dma_device_type": 2 00:09:25.279 } 00:09:25.279 ], 00:09:25.279 "driver_specific": { 00:09:25.279 "raid": { 00:09:25.279 "uuid": "7814b7f7-d5d6-4885-87e4-0bbaf621abeb", 00:09:25.279 "strip_size_kb": 64, 00:09:25.279 "state": "online", 00:09:25.279 "raid_level": "concat", 00:09:25.279 "superblock": true, 00:09:25.279 "num_base_bdevs": 3, 00:09:25.279 "num_base_bdevs_discovered": 3, 00:09:25.279 "num_base_bdevs_operational": 3, 00:09:25.279 "base_bdevs_list": [ 00:09:25.279 { 00:09:25.279 
"name": "BaseBdev1", 00:09:25.279 "uuid": "994eaabe-619b-4365-ad7a-0089a9017900", 00:09:25.279 "is_configured": true, 00:09:25.279 "data_offset": 2048, 00:09:25.279 "data_size": 63488 00:09:25.279 }, 00:09:25.279 { 00:09:25.279 "name": "BaseBdev2", 00:09:25.279 "uuid": "907b1457-6c7e-4503-b7da-516d865796ad", 00:09:25.279 "is_configured": true, 00:09:25.279 "data_offset": 2048, 00:09:25.279 "data_size": 63488 00:09:25.279 }, 00:09:25.279 { 00:09:25.279 "name": "BaseBdev3", 00:09:25.279 "uuid": "b1fa4be9-f5f8-47f0-b686-f3fee6b962e7", 00:09:25.279 "is_configured": true, 00:09:25.280 "data_offset": 2048, 00:09:25.280 "data_size": 63488 00:09:25.280 } 00:09:25.280 ] 00:09:25.280 } 00:09:25.280 } 00:09:25.280 }' 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:25.280 BaseBdev2 00:09:25.280 BaseBdev3' 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 12:35:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.280 12:35:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.540 [2024-12-14 12:35:25.077385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:25.540 [2024-12-14 12:35:25.077471] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.540 [2024-12-14 12:35:25.077563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.540 "name": "Existed_Raid", 00:09:25.540 "uuid": "7814b7f7-d5d6-4885-87e4-0bbaf621abeb", 00:09:25.540 "strip_size_kb": 64, 00:09:25.540 "state": "offline", 00:09:25.540 "raid_level": "concat", 00:09:25.540 "superblock": true, 00:09:25.540 "num_base_bdevs": 3, 00:09:25.540 "num_base_bdevs_discovered": 2, 00:09:25.540 "num_base_bdevs_operational": 2, 00:09:25.540 "base_bdevs_list": [ 00:09:25.540 { 00:09:25.540 "name": null, 00:09:25.540 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:25.540 "is_configured": false, 00:09:25.540 "data_offset": 0, 00:09:25.540 "data_size": 63488 00:09:25.540 }, 00:09:25.540 { 00:09:25.540 "name": "BaseBdev2", 00:09:25.540 "uuid": "907b1457-6c7e-4503-b7da-516d865796ad", 00:09:25.540 "is_configured": true, 00:09:25.540 "data_offset": 2048, 00:09:25.540 "data_size": 63488 00:09:25.540 }, 00:09:25.540 { 00:09:25.540 "name": "BaseBdev3", 00:09:25.540 "uuid": "b1fa4be9-f5f8-47f0-b686-f3fee6b962e7", 00:09:25.540 "is_configured": true, 00:09:25.540 "data_offset": 2048, 00:09:25.540 "data_size": 63488 00:09:25.540 } 00:09:25.540 ] 00:09:25.540 }' 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.540 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.109 [2024-12-14 12:35:25.664818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.109 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.109 [2024-12-14 12:35:25.810312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:26.109 [2024-12-14 12:35:25.810426] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.369 BaseBdev2 00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.369 
12:35:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.369 12:35:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.369 [
00:09:26.369 {
00:09:26.369 "name": "BaseBdev2",
00:09:26.369 "aliases": [
00:09:26.369 "12f28a5f-3a26-4071-8e7b-d5e03088d7db"
00:09:26.369 ],
00:09:26.369 "product_name": "Malloc disk",
00:09:26.369 "block_size": 512,
00:09:26.369 "num_blocks": 65536,
00:09:26.369 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db",
00:09:26.369 "assigned_rate_limits": {
00:09:26.369 "rw_ios_per_sec": 0,
00:09:26.369 "rw_mbytes_per_sec": 0,
00:09:26.369 "r_mbytes_per_sec": 0,
00:09:26.369 "w_mbytes_per_sec": 0
00:09:26.369 },
00:09:26.369 "claimed": false,
00:09:26.369 "zoned": false,
00:09:26.369 "supported_io_types": {
00:09:26.369 "read": true,
00:09:26.369 "write": true,
00:09:26.369 "unmap": true,
00:09:26.369 "flush": true,
00:09:26.369 "reset": true,
00:09:26.369 "nvme_admin": false,
00:09:26.369 "nvme_io": false,
00:09:26.369 "nvme_io_md": false,
00:09:26.369 "write_zeroes": true,
00:09:26.369 "zcopy": true,
00:09:26.369 "get_zone_info": false,
00:09:26.369 "zone_management": false,
00:09:26.369 "zone_append": false,
00:09:26.369 "compare": false,
00:09:26.369 "compare_and_write": false,
00:09:26.369 "abort": true,
00:09:26.369 "seek_hole": false,
00:09:26.369 "seek_data": false,
00:09:26.369 "copy": true,
00:09:26.369 "nvme_iov_md": false
00:09:26.369 },
00:09:26.369 "memory_domains": [
00:09:26.369 {
00:09:26.369 "dma_device_id": "system",
00:09:26.369 "dma_device_type": 1
00:09:26.369 },
00:09:26.369 {
00:09:26.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:26.369 "dma_device_type": 2
00:09:26.369 }
00:09:26.369 ],
00:09:26.369 "driver_specific": {}
00:09:26.369 }
00:09:26.369 ]
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.369 BaseBdev3
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.369 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.369 [
00:09:26.369 {
00:09:26.369 "name": "BaseBdev3",
00:09:26.369 "aliases": [
00:09:26.369 "0da0249e-9eab-4719-b2d0-f104f3a1f070"
00:09:26.369 ],
00:09:26.369 "product_name": "Malloc disk",
00:09:26.369 "block_size": 512,
00:09:26.369 "num_blocks": 65536,
00:09:26.369 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070",
00:09:26.369 "assigned_rate_limits": {
00:09:26.369 "rw_ios_per_sec": 0,
00:09:26.369 "rw_mbytes_per_sec": 0,
00:09:26.369 "r_mbytes_per_sec": 0,
00:09:26.369 "w_mbytes_per_sec": 0
00:09:26.369 },
00:09:26.369 "claimed": false,
00:09:26.369 "zoned": false,
00:09:26.369 "supported_io_types": {
00:09:26.369 "read": true,
00:09:26.369 "write": true,
00:09:26.369 "unmap": true,
00:09:26.629 "flush": true,
00:09:26.629 "reset": true,
00:09:26.629 "nvme_admin": false,
00:09:26.629 "nvme_io": false,
00:09:26.629 "nvme_io_md": false,
00:09:26.629 "write_zeroes": true,
00:09:26.629 "zcopy": true,
00:09:26.629 "get_zone_info": false,
00:09:26.629 "zone_management": false,
00:09:26.629 "zone_append": false,
00:09:26.629 "compare": false,
00:09:26.629 "compare_and_write": false,
00:09:26.629 "abort": true,
00:09:26.629 "seek_hole": false,
00:09:26.629 "seek_data": false,
00:09:26.629 "copy": true,
00:09:26.629 "nvme_iov_md": false
00:09:26.629 },
00:09:26.629 "memory_domains": [
00:09:26.629 {
00:09:26.629 "dma_device_id": "system",
00:09:26.629 "dma_device_type": 1
00:09:26.629 },
00:09:26.629 {
00:09:26.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:26.629 "dma_device_type": 2
00:09:26.629 }
00:09:26.629 ],
00:09:26.629 "driver_specific": {}
00:09:26.629 }
00:09:26.629 ]
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.629 [2024-12-14 12:35:26.118053] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:26.629 [2024-12-14 12:35:26.118199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:26.629 [2024-12-14 12:35:26.118276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:26.629 [2024-12-14 12:35:26.120367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.629 "name": "Existed_Raid",
00:09:26.629 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5",
00:09:26.629 "strip_size_kb": 64,
00:09:26.629 "state": "configuring",
00:09:26.629 "raid_level": "concat",
00:09:26.629 "superblock": true,
00:09:26.629 "num_base_bdevs": 3,
00:09:26.629 "num_base_bdevs_discovered": 2,
00:09:26.629 "num_base_bdevs_operational": 3,
00:09:26.629 "base_bdevs_list": [
00:09:26.629 {
00:09:26.629 "name": "BaseBdev1",
00:09:26.629 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.629 "is_configured": false,
00:09:26.629 "data_offset": 0,
00:09:26.629 "data_size": 0
00:09:26.629 },
00:09:26.629 {
00:09:26.629 "name": "BaseBdev2",
00:09:26.629 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db",
00:09:26.629 "is_configured": true,
00:09:26.629 "data_offset": 2048,
00:09:26.629 "data_size": 63488
00:09:26.629 },
00:09:26.629 {
00:09:26.629 "name": "BaseBdev3",
00:09:26.629 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070",
00:09:26.629 "is_configured": true,
00:09:26.629 "data_offset": 2048,
00:09:26.629 "data_size": 63488
00:09:26.629 }
00:09:26.629 ]
00:09:26.629 }'
00:09:26.629 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.889 [2024-12-14 12:35:26.545279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:26.889 "name": "Existed_Raid",
00:09:26.889 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5",
00:09:26.889 "strip_size_kb": 64,
00:09:26.889 "state": "configuring",
00:09:26.889 "raid_level": "concat",
00:09:26.889 "superblock": true,
00:09:26.889 "num_base_bdevs": 3,
00:09:26.889 "num_base_bdevs_discovered": 1,
00:09:26.889 "num_base_bdevs_operational": 3,
00:09:26.889 "base_bdevs_list": [
00:09:26.889 {
00:09:26.889 "name": "BaseBdev1",
00:09:26.889 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:26.889 "is_configured": false,
00:09:26.889 "data_offset": 0,
00:09:26.889 "data_size": 0
00:09:26.889 },
00:09:26.889 {
00:09:26.889 "name": null,
00:09:26.889 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db",
00:09:26.889 "is_configured": false,
00:09:26.889 "data_offset": 0,
00:09:26.889 "data_size": 63488
00:09:26.889 },
00:09:26.889 {
00:09:26.889 "name": "BaseBdev3",
00:09:26.889 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070",
00:09:26.889 "is_configured": true,
00:09:26.889 "data_offset": 2048,
00:09:26.889 "data_size": 63488
00:09:26.889 }
00:09:26.889 ]
00:09:26.889 }'
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:26.889 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.458 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:27.458 12:35:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.458 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.458 12:35:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.458 [2024-12-14 12:35:27.044339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:27.458 BaseBdev1
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.458 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.458 [
00:09:27.458 {
00:09:27.458 "name": "BaseBdev1",
00:09:27.458 "aliases": [
00:09:27.458 "aec987dd-1bbe-489f-b3bd-43c85304d5b7"
00:09:27.458 ],
00:09:27.458 "product_name": "Malloc disk",
00:09:27.458 "block_size": 512,
00:09:27.458 "num_blocks": 65536,
00:09:27.459 "uuid": "aec987dd-1bbe-489f-b3bd-43c85304d5b7",
00:09:27.459 "assigned_rate_limits": {
00:09:27.459 "rw_ios_per_sec": 0,
00:09:27.459 "rw_mbytes_per_sec": 0,
00:09:27.459 "r_mbytes_per_sec": 0,
00:09:27.459 "w_mbytes_per_sec": 0
00:09:27.459 },
00:09:27.459 "claimed": true,
00:09:27.459 "claim_type": "exclusive_write",
00:09:27.459 "zoned": false,
00:09:27.459 "supported_io_types": {
00:09:27.459 "read": true,
00:09:27.459 "write": true,
00:09:27.459 "unmap": true,
00:09:27.459 "flush": true,
00:09:27.459 "reset": true,
00:09:27.459 "nvme_admin": false,
00:09:27.459 "nvme_io": false,
00:09:27.459 "nvme_io_md": false,
00:09:27.459 "write_zeroes": true,
00:09:27.459 "zcopy": true,
00:09:27.459 "get_zone_info": false,
00:09:27.459 "zone_management": false,
00:09:27.459 "zone_append": false,
00:09:27.459 "compare": false,
00:09:27.459 "compare_and_write": false,
00:09:27.459 "abort": true,
00:09:27.459 "seek_hole": false,
00:09:27.459 "seek_data": false,
00:09:27.459 "copy": true,
00:09:27.459 "nvme_iov_md": false
00:09:27.459 },
00:09:27.459 "memory_domains": [
00:09:27.459 {
00:09:27.459 "dma_device_id": "system",
00:09:27.459 "dma_device_type": 1
00:09:27.459 },
00:09:27.459 {
00:09:27.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:27.459 "dma_device_type": 2
00:09:27.459 }
00:09:27.459 ],
00:09:27.459 "driver_specific": {}
00:09:27.459 }
00:09:27.459 ]
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:27.459 "name": "Existed_Raid",
00:09:27.459 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5",
00:09:27.459 "strip_size_kb": 64,
00:09:27.459 "state": "configuring",
00:09:27.459 "raid_level": "concat",
00:09:27.459 "superblock": true,
00:09:27.459 "num_base_bdevs": 3,
00:09:27.459 "num_base_bdevs_discovered": 2,
00:09:27.459 "num_base_bdevs_operational": 3,
00:09:27.459 "base_bdevs_list": [
00:09:27.459 {
00:09:27.459 "name": "BaseBdev1",
00:09:27.459 "uuid": "aec987dd-1bbe-489f-b3bd-43c85304d5b7",
00:09:27.459 "is_configured": true,
00:09:27.459 "data_offset": 2048,
00:09:27.459 "data_size": 63488
00:09:27.459 },
00:09:27.459 {
00:09:27.459 "name": null,
00:09:27.459 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db",
00:09:27.459 "is_configured": false,
00:09:27.459 "data_offset": 0,
00:09:27.459 "data_size": 63488
00:09:27.459 },
00:09:27.459 {
00:09:27.459 "name": "BaseBdev3",
00:09:27.459 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070",
00:09:27.459 "is_configured": true,
00:09:27.459 "data_offset": 2048,
00:09:27.459 "data_size": 63488
00:09:27.459 }
00:09:27.459 ]
00:09:27.459 }'
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:27.459 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.028 [2024-12-14 12:35:27.575511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.028 "name": "Existed_Raid",
00:09:28.028 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5",
00:09:28.028 "strip_size_kb": 64,
00:09:28.028 "state": "configuring",
00:09:28.028 "raid_level": "concat",
00:09:28.028 "superblock": true,
00:09:28.028 "num_base_bdevs": 3,
00:09:28.028 "num_base_bdevs_discovered": 1,
00:09:28.028 "num_base_bdevs_operational": 3,
00:09:28.028 "base_bdevs_list": [
00:09:28.028 {
00:09:28.028 "name": "BaseBdev1",
00:09:28.028 "uuid": "aec987dd-1bbe-489f-b3bd-43c85304d5b7",
00:09:28.028 "is_configured": true,
00:09:28.028 "data_offset": 2048,
00:09:28.028 "data_size": 63488
00:09:28.028 },
00:09:28.028 {
00:09:28.028 "name": null,
00:09:28.028 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db",
00:09:28.028 "is_configured": false,
00:09:28.028 "data_offset": 0,
00:09:28.028 "data_size": 63488
00:09:28.028 },
00:09:28.028 {
00:09:28.028 "name": null,
00:09:28.028 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070",
00:09:28.028 "is_configured": false,
00:09:28.028 "data_offset": 0,
00:09:28.028 "data_size": 63488
00:09:28.028 }
00:09:28.028 ]
00:09:28.028 }'
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.028 12:35:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.289 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:28.289 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.289 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.289 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.549 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.549 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]]
00:09:28.549 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3
00:09:28.549 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.549 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.549 [2024-12-14 12:35:28.062723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:28.550 "name": "Existed_Raid",
00:09:28.550 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5",
00:09:28.550 "strip_size_kb": 64,
00:09:28.550 "state": "configuring",
00:09:28.550 "raid_level": "concat",
00:09:28.550 "superblock": true,
00:09:28.550 "num_base_bdevs": 3,
00:09:28.550 "num_base_bdevs_discovered": 2,
00:09:28.550 "num_base_bdevs_operational": 3,
00:09:28.550 "base_bdevs_list": [
00:09:28.550 {
00:09:28.550 "name": "BaseBdev1",
00:09:28.550 "uuid": "aec987dd-1bbe-489f-b3bd-43c85304d5b7",
00:09:28.550 "is_configured": true,
00:09:28.550 "data_offset": 2048,
00:09:28.550 "data_size": 63488
00:09:28.550 },
00:09:28.550 {
00:09:28.550 "name": null,
00:09:28.550 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db",
00:09:28.550 "is_configured": false,
00:09:28.550 "data_offset": 0,
00:09:28.550 "data_size": 63488
00:09:28.550 },
00:09:28.550 {
00:09:28.550 "name": "BaseBdev3",
00:09:28.550 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070",
00:09:28.550 "is_configured": true,
00:09:28.550 "data_offset": 2048,
00:09:28.550 "data_size": 63488
00:09:28.550 }
00:09:28.550 ]
00:09:28.550 }'
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:28.550 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured'
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]]
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:28.810 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:28.810 [2024-12-14 12:35:28.521973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:29.069 "name": "Existed_Raid",
00:09:29.069 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5",
00:09:29.069 "strip_size_kb": 64,
00:09:29.069 "state": "configuring",
00:09:29.069 "raid_level": "concat",
00:09:29.069 "superblock": true,
00:09:29.069 "num_base_bdevs": 3,
00:09:29.069 "num_base_bdevs_discovered": 1,
00:09:29.069 "num_base_bdevs_operational": 3,
00:09:29.069 "base_bdevs_list": [
00:09:29.069 {
00:09:29.069 "name": null,
00:09:29.069 "uuid": "aec987dd-1bbe-489f-b3bd-43c85304d5b7",
00:09:29.069 "is_configured": false,
00:09:29.069 "data_offset": 0,
00:09:29.069 "data_size": 63488
00:09:29.069 },
00:09:29.069 {
00:09:29.069 "name": null,
00:09:29.069 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db",
00:09:29.069 "is_configured": false,
00:09:29.069 "data_offset": 0,
00:09:29.069 "data_size": 63488
00:09:29.069 },
00:09:29.069 {
00:09:29.069 "name": "BaseBdev3",
00:09:29.069 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070",
00:09:29.069 "is_configured": true,
00:09:29.069 "data_offset": 2048,
00:09:29.069 "data_size": 63488
00:09:29.069 }
00:09:29.069 ]
00:09:29.069 }'
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:29.069 12:35:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]]
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.638 [2024-12-14 12:35:29.150744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:29.638 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:29.638 "name": "Existed_Raid",
00:09:29.638 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5",
00:09:29.638 "strip_size_kb": 64,
00:09:29.638 "state": "configuring",
00:09:29.638 "raid_level": "concat",
00:09:29.638 "superblock": true,
00:09:29.638 "num_base_bdevs": 3,
00:09:29.638 "num_base_bdevs_discovered": 2,
00:09:29.638 "num_base_bdevs_operational": 3,
00:09:29.638 "base_bdevs_list": [
00:09:29.638 {
00:09:29.638 "name": null,
00:09:29.638 "uuid": "aec987dd-1bbe-489f-b3bd-43c85304d5b7",
00:09:29.638 "is_configured": false,
00:09:29.638 "data_offset": 0,
00:09:29.638 "data_size": 63488
00:09:29.638 },
00:09:29.638 {
00:09:29.638 "name": "BaseBdev2",
00:09:29.638 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db",
00:09:29.638 "is_configured": true,
00:09:29.638 "data_offset": 2048,
00:09:29.639 "data_size": 63488
00:09:29.639 },
00:09:29.639 {
00:09:29.639 "name": "BaseBdev3",
00:09:29.639 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070",
00:09:29.639 "is_configured": true,
00:09:29.639 "data_offset": 2048,
00:09:29.639 "data_size": 63488
00:09:29.639 }
00:09:29.639 ]
00:09:29.639 }'
00:09:29.639 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:29.639 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.898 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:29.898 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:09:29.898 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:29.898 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:29.898 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.157 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:30.157 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:30.157 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.157 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.157 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.157 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aec987dd-1bbe-489f-b3bd-43c85304d5b7 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.158 [2024-12-14 12:35:29.717709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:30.158 [2024-12-14 12:35:29.718011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:30.158 [2024-12-14 12:35:29.718034] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.158 [2024-12-14 12:35:29.718306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:30.158 [2024-12-14 12:35:29.718461] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:30.158 [2024-12-14 12:35:29.718472] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:09:30.158 NewBaseBdev 00:09:30.158 [2024-12-14 12:35:29.718617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.158 [ 00:09:30.158 { 00:09:30.158 "name": "NewBaseBdev", 00:09:30.158 "aliases": [ 00:09:30.158 "aec987dd-1bbe-489f-b3bd-43c85304d5b7" 00:09:30.158 ], 00:09:30.158 "product_name": "Malloc disk", 00:09:30.158 "block_size": 512, 
00:09:30.158 "num_blocks": 65536, 00:09:30.158 "uuid": "aec987dd-1bbe-489f-b3bd-43c85304d5b7", 00:09:30.158 "assigned_rate_limits": { 00:09:30.158 "rw_ios_per_sec": 0, 00:09:30.158 "rw_mbytes_per_sec": 0, 00:09:30.158 "r_mbytes_per_sec": 0, 00:09:30.158 "w_mbytes_per_sec": 0 00:09:30.158 }, 00:09:30.158 "claimed": true, 00:09:30.158 "claim_type": "exclusive_write", 00:09:30.158 "zoned": false, 00:09:30.158 "supported_io_types": { 00:09:30.158 "read": true, 00:09:30.158 "write": true, 00:09:30.158 "unmap": true, 00:09:30.158 "flush": true, 00:09:30.158 "reset": true, 00:09:30.158 "nvme_admin": false, 00:09:30.158 "nvme_io": false, 00:09:30.158 "nvme_io_md": false, 00:09:30.158 "write_zeroes": true, 00:09:30.158 "zcopy": true, 00:09:30.158 "get_zone_info": false, 00:09:30.158 "zone_management": false, 00:09:30.158 "zone_append": false, 00:09:30.158 "compare": false, 00:09:30.158 "compare_and_write": false, 00:09:30.158 "abort": true, 00:09:30.158 "seek_hole": false, 00:09:30.158 "seek_data": false, 00:09:30.158 "copy": true, 00:09:30.158 "nvme_iov_md": false 00:09:30.158 }, 00:09:30.158 "memory_domains": [ 00:09:30.158 { 00:09:30.158 "dma_device_id": "system", 00:09:30.158 "dma_device_type": 1 00:09:30.158 }, 00:09:30.158 { 00:09:30.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.158 "dma_device_type": 2 00:09:30.158 } 00:09:30.158 ], 00:09:30.158 "driver_specific": {} 00:09:30.158 } 00:09:30.158 ] 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.158 "name": "Existed_Raid", 00:09:30.158 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5", 00:09:30.158 "strip_size_kb": 64, 00:09:30.158 "state": "online", 00:09:30.158 "raid_level": "concat", 00:09:30.158 "superblock": true, 00:09:30.158 "num_base_bdevs": 3, 00:09:30.158 "num_base_bdevs_discovered": 3, 00:09:30.158 "num_base_bdevs_operational": 3, 00:09:30.158 "base_bdevs_list": [ 00:09:30.158 { 00:09:30.158 "name": "NewBaseBdev", 00:09:30.158 "uuid": 
"aec987dd-1bbe-489f-b3bd-43c85304d5b7", 00:09:30.158 "is_configured": true, 00:09:30.158 "data_offset": 2048, 00:09:30.158 "data_size": 63488 00:09:30.158 }, 00:09:30.158 { 00:09:30.158 "name": "BaseBdev2", 00:09:30.158 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db", 00:09:30.158 "is_configured": true, 00:09:30.158 "data_offset": 2048, 00:09:30.158 "data_size": 63488 00:09:30.158 }, 00:09:30.158 { 00:09:30.158 "name": "BaseBdev3", 00:09:30.158 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070", 00:09:30.158 "is_configured": true, 00:09:30.158 "data_offset": 2048, 00:09:30.158 "data_size": 63488 00:09:30.158 } 00:09:30.158 ] 00:09:30.158 }' 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.158 12:35:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.416 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:30.417 [2024-12-14 12:35:30.149390] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.675 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.675 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.675 "name": "Existed_Raid", 00:09:30.675 "aliases": [ 00:09:30.675 "0888f9df-a6d4-41fc-9260-2b7f941f5ef5" 00:09:30.675 ], 00:09:30.675 "product_name": "Raid Volume", 00:09:30.675 "block_size": 512, 00:09:30.675 "num_blocks": 190464, 00:09:30.675 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5", 00:09:30.675 "assigned_rate_limits": { 00:09:30.675 "rw_ios_per_sec": 0, 00:09:30.675 "rw_mbytes_per_sec": 0, 00:09:30.675 "r_mbytes_per_sec": 0, 00:09:30.675 "w_mbytes_per_sec": 0 00:09:30.675 }, 00:09:30.675 "claimed": false, 00:09:30.675 "zoned": false, 00:09:30.675 "supported_io_types": { 00:09:30.675 "read": true, 00:09:30.675 "write": true, 00:09:30.675 "unmap": true, 00:09:30.675 "flush": true, 00:09:30.675 "reset": true, 00:09:30.675 "nvme_admin": false, 00:09:30.675 "nvme_io": false, 00:09:30.675 "nvme_io_md": false, 00:09:30.675 "write_zeroes": true, 00:09:30.675 "zcopy": false, 00:09:30.675 "get_zone_info": false, 00:09:30.675 "zone_management": false, 00:09:30.675 "zone_append": false, 00:09:30.675 "compare": false, 00:09:30.675 "compare_and_write": false, 00:09:30.675 "abort": false, 00:09:30.675 "seek_hole": false, 00:09:30.675 "seek_data": false, 00:09:30.675 "copy": false, 00:09:30.675 "nvme_iov_md": false 00:09:30.675 }, 00:09:30.675 "memory_domains": [ 00:09:30.675 { 00:09:30.675 "dma_device_id": "system", 00:09:30.675 "dma_device_type": 1 00:09:30.675 }, 00:09:30.675 { 00:09:30.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.675 "dma_device_type": 2 00:09:30.675 }, 00:09:30.675 { 00:09:30.675 "dma_device_id": "system", 00:09:30.675 "dma_device_type": 1 00:09:30.675 }, 00:09:30.675 { 00:09:30.675 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.675 "dma_device_type": 2 00:09:30.675 }, 00:09:30.675 { 00:09:30.675 "dma_device_id": "system", 00:09:30.675 "dma_device_type": 1 00:09:30.675 }, 00:09:30.675 { 00:09:30.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.675 "dma_device_type": 2 00:09:30.675 } 00:09:30.675 ], 00:09:30.675 "driver_specific": { 00:09:30.675 "raid": { 00:09:30.675 "uuid": "0888f9df-a6d4-41fc-9260-2b7f941f5ef5", 00:09:30.675 "strip_size_kb": 64, 00:09:30.675 "state": "online", 00:09:30.675 "raid_level": "concat", 00:09:30.675 "superblock": true, 00:09:30.675 "num_base_bdevs": 3, 00:09:30.675 "num_base_bdevs_discovered": 3, 00:09:30.675 "num_base_bdevs_operational": 3, 00:09:30.675 "base_bdevs_list": [ 00:09:30.675 { 00:09:30.675 "name": "NewBaseBdev", 00:09:30.675 "uuid": "aec987dd-1bbe-489f-b3bd-43c85304d5b7", 00:09:30.675 "is_configured": true, 00:09:30.675 "data_offset": 2048, 00:09:30.675 "data_size": 63488 00:09:30.675 }, 00:09:30.675 { 00:09:30.675 "name": "BaseBdev2", 00:09:30.675 "uuid": "12f28a5f-3a26-4071-8e7b-d5e03088d7db", 00:09:30.675 "is_configured": true, 00:09:30.675 "data_offset": 2048, 00:09:30.675 "data_size": 63488 00:09:30.675 }, 00:09:30.676 { 00:09:30.676 "name": "BaseBdev3", 00:09:30.676 "uuid": "0da0249e-9eab-4719-b2d0-f104f3a1f070", 00:09:30.676 "is_configured": true, 00:09:30.676 "data_offset": 2048, 00:09:30.676 "data_size": 63488 00:09:30.676 } 00:09:30.676 ] 00:09:30.676 } 00:09:30.676 } 00:09:30.676 }' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:30.676 BaseBdev2 00:09:30.676 BaseBdev3' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.676 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:30.935 [2024-12-14 12:35:30.416553] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:30.935 [2024-12-14 12:35:30.416584] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.935 [2024-12-14 12:35:30.416683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.935 [2024-12-14 12:35:30.416746] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.935 [2024-12-14 12:35:30.416759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68024 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68024 ']' 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68024 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68024 00:09:30.935 killing process with pid 68024 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68024' 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68024 00:09:30.935 [2024-12-14 12:35:30.457478] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.935 12:35:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68024 00:09:31.194 [2024-12-14 12:35:30.763881] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.573 12:35:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:32.573 00:09:32.573 real 0m10.439s 00:09:32.573 user 0m16.664s 00:09:32.573 sys 0m1.705s 00:09:32.573 ************************************ 00:09:32.573 END TEST raid_state_function_test_sb 
00:09:32.573 ************************************ 00:09:32.573 12:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.573 12:35:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.573 12:35:31 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:32.573 12:35:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:32.573 12:35:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.573 12:35:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:32.573 ************************************ 00:09:32.573 START TEST raid_superblock_test 00:09:32.573 ************************************ 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:32.573 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:32.574 12:35:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68647 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68647 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68647 ']' 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.574 12:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.574 [2024-12-14 12:35:32.054539] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:32.574 [2024-12-14 12:35:32.054742] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68647 ] 00:09:32.574 [2024-12-14 12:35:32.209603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.833 [2024-12-14 12:35:32.330823] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.833 [2024-12-14 12:35:32.530299] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.833 [2024-12-14 12:35:32.530356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:33.402 
12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.402 malloc1 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.402 [2024-12-14 12:35:32.957251] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:33.402 [2024-12-14 12:35:32.957439] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.402 [2024-12-14 12:35:32.957474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:33.402 [2024-12-14 12:35:32.957485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.402 [2024-12-14 12:35:32.960082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.402 [2024-12-14 12:35:32.960133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:33.402 pt1 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.402 12:35:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.402 malloc2 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.402 [2024-12-14 12:35:33.015175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.402 [2024-12-14 12:35:33.015306] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.402 [2024-12-14 12:35:33.015350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:33.402 [2024-12-14 12:35:33.015361] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.402 [2024-12-14 12:35:33.017810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.402 [2024-12-14 12:35:33.017847] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.402 
pt2 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.402 malloc3 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.402 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.403 [2024-12-14 12:35:33.086612] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:33.403 [2024-12-14 12:35:33.086677] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.403 [2024-12-14 12:35:33.086703] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:33.403 [2024-12-14 12:35:33.086713] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.403 [2024-12-14 12:35:33.089173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.403 [2024-12-14 12:35:33.089281] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:33.403 pt3 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.403 [2024-12-14 12:35:33.098640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:33.403 [2024-12-14 12:35:33.100573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.403 [2024-12-14 12:35:33.100690] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:33.403 [2024-12-14 12:35:33.100895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:33.403 [2024-12-14 12:35:33.100912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:33.403 [2024-12-14 12:35:33.101233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:33.403 [2024-12-14 12:35:33.101413] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:33.403 [2024-12-14 12:35:33.101424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:33.403 [2024-12-14 12:35:33.101598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.403 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.662 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.662 "name": "raid_bdev1", 00:09:33.662 "uuid": "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca", 00:09:33.662 "strip_size_kb": 64, 00:09:33.662 "state": "online", 00:09:33.663 "raid_level": "concat", 00:09:33.663 "superblock": true, 00:09:33.663 "num_base_bdevs": 3, 00:09:33.663 "num_base_bdevs_discovered": 3, 00:09:33.663 "num_base_bdevs_operational": 3, 00:09:33.663 "base_bdevs_list": [ 00:09:33.663 { 00:09:33.663 "name": "pt1", 00:09:33.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.663 "is_configured": true, 00:09:33.663 "data_offset": 2048, 00:09:33.663 "data_size": 63488 00:09:33.663 }, 00:09:33.663 { 00:09:33.663 "name": "pt2", 00:09:33.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.663 "is_configured": true, 00:09:33.663 "data_offset": 2048, 00:09:33.663 "data_size": 63488 00:09:33.663 }, 00:09:33.663 { 00:09:33.663 "name": "pt3", 00:09:33.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.663 "is_configured": true, 00:09:33.663 "data_offset": 2048, 00:09:33.663 "data_size": 63488 00:09:33.663 } 00:09:33.663 ] 00:09:33.663 }' 00:09:33.663 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.663 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.922 [2024-12-14 12:35:33.554271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.922 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:33.922 "name": "raid_bdev1", 00:09:33.922 "aliases": [ 00:09:33.922 "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca" 00:09:33.922 ], 00:09:33.922 "product_name": "Raid Volume", 00:09:33.922 "block_size": 512, 00:09:33.922 "num_blocks": 190464, 00:09:33.922 "uuid": "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca", 00:09:33.922 "assigned_rate_limits": { 00:09:33.922 "rw_ios_per_sec": 0, 00:09:33.922 "rw_mbytes_per_sec": 0, 00:09:33.922 "r_mbytes_per_sec": 0, 00:09:33.922 "w_mbytes_per_sec": 0 00:09:33.922 }, 00:09:33.922 "claimed": false, 00:09:33.922 "zoned": false, 00:09:33.923 "supported_io_types": { 00:09:33.923 "read": true, 00:09:33.923 "write": true, 00:09:33.923 "unmap": true, 00:09:33.923 "flush": true, 00:09:33.923 "reset": true, 00:09:33.923 "nvme_admin": false, 00:09:33.923 "nvme_io": false, 00:09:33.923 "nvme_io_md": false, 00:09:33.923 "write_zeroes": true, 00:09:33.923 "zcopy": false, 00:09:33.923 "get_zone_info": false, 00:09:33.923 "zone_management": false, 00:09:33.923 "zone_append": false, 00:09:33.923 "compare": 
false, 00:09:33.923 "compare_and_write": false, 00:09:33.923 "abort": false, 00:09:33.923 "seek_hole": false, 00:09:33.923 "seek_data": false, 00:09:33.923 "copy": false, 00:09:33.923 "nvme_iov_md": false 00:09:33.923 }, 00:09:33.923 "memory_domains": [ 00:09:33.923 { 00:09:33.923 "dma_device_id": "system", 00:09:33.923 "dma_device_type": 1 00:09:33.923 }, 00:09:33.923 { 00:09:33.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.923 "dma_device_type": 2 00:09:33.923 }, 00:09:33.923 { 00:09:33.923 "dma_device_id": "system", 00:09:33.923 "dma_device_type": 1 00:09:33.923 }, 00:09:33.923 { 00:09:33.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.923 "dma_device_type": 2 00:09:33.923 }, 00:09:33.923 { 00:09:33.923 "dma_device_id": "system", 00:09:33.923 "dma_device_type": 1 00:09:33.923 }, 00:09:33.923 { 00:09:33.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.923 "dma_device_type": 2 00:09:33.923 } 00:09:33.923 ], 00:09:33.923 "driver_specific": { 00:09:33.923 "raid": { 00:09:33.923 "uuid": "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca", 00:09:33.923 "strip_size_kb": 64, 00:09:33.923 "state": "online", 00:09:33.923 "raid_level": "concat", 00:09:33.923 "superblock": true, 00:09:33.923 "num_base_bdevs": 3, 00:09:33.923 "num_base_bdevs_discovered": 3, 00:09:33.923 "num_base_bdevs_operational": 3, 00:09:33.923 "base_bdevs_list": [ 00:09:33.923 { 00:09:33.923 "name": "pt1", 00:09:33.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:33.923 "is_configured": true, 00:09:33.923 "data_offset": 2048, 00:09:33.923 "data_size": 63488 00:09:33.923 }, 00:09:33.923 { 00:09:33.923 "name": "pt2", 00:09:33.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.923 "is_configured": true, 00:09:33.923 "data_offset": 2048, 00:09:33.923 "data_size": 63488 00:09:33.923 }, 00:09:33.923 { 00:09:33.923 "name": "pt3", 00:09:33.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.923 "is_configured": true, 00:09:33.923 "data_offset": 2048, 00:09:33.923 
"data_size": 63488 00:09:33.923 } 00:09:33.923 ] 00:09:33.923 } 00:09:33.923 } 00:09:33.923 }' 00:09:33.923 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:33.923 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:33.923 pt2 00:09:33.923 pt3' 00:09:33.923 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:34.183 [2024-12-14 12:35:33.837723] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cb84efea-34bd-4d3c-9c9f-e25f694bb7ca 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z cb84efea-34bd-4d3c-9c9f-e25f694bb7ca ']' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.183 [2024-12-14 12:35:33.889330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.183 [2024-12-14 12:35:33.889430] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.183 [2024-12-14 12:35:33.889575] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.183 [2024-12-14 12:35:33.889673] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.183 [2024-12-14 12:35:33.889723] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:34.183 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:34.443 12:35:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.443 12:35:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 12:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 [2024-12-14 12:35:34.037146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:34.444 [2024-12-14 12:35:34.039160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 
00:09:34.444 [2024-12-14 12:35:34.039265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:34.444 [2024-12-14 12:35:34.039357] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:34.444 [2024-12-14 12:35:34.039499] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:34.444 [2024-12-14 12:35:34.039568] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:34.444 [2024-12-14 12:35:34.039629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.444 [2024-12-14 12:35:34.039641] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:34.444 request: 00:09:34.444 { 00:09:34.444 "name": "raid_bdev1", 00:09:34.444 "raid_level": "concat", 00:09:34.444 "base_bdevs": [ 00:09:34.444 "malloc1", 00:09:34.444 "malloc2", 00:09:34.444 "malloc3" 00:09:34.444 ], 00:09:34.444 "strip_size_kb": 64, 00:09:34.444 "superblock": false, 00:09:34.444 "method": "bdev_raid_create", 00:09:34.444 "req_id": 1 00:09:34.444 } 00:09:34.444 Got JSON-RPC error response 00:09:34.444 response: 00:09:34.444 { 00:09:34.444 "code": -17, 00:09:34.444 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:34.444 } 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 [2024-12-14 12:35:34.104949] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.444 [2024-12-14 12:35:34.105076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.444 [2024-12-14 12:35:34.105115] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:34.444 [2024-12-14 12:35:34.105146] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.444 [2024-12-14 12:35:34.107549] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.444 [2024-12-14 12:35:34.107622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.444 [2024-12-14 12:35:34.107734] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:34.444 [2024-12-14 12:35:34.107823] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.444 pt1 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.444 "name": "raid_bdev1", 
00:09:34.444 "uuid": "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca", 00:09:34.444 "strip_size_kb": 64, 00:09:34.444 "state": "configuring", 00:09:34.444 "raid_level": "concat", 00:09:34.444 "superblock": true, 00:09:34.444 "num_base_bdevs": 3, 00:09:34.444 "num_base_bdevs_discovered": 1, 00:09:34.444 "num_base_bdevs_operational": 3, 00:09:34.444 "base_bdevs_list": [ 00:09:34.444 { 00:09:34.444 "name": "pt1", 00:09:34.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:34.444 "is_configured": true, 00:09:34.444 "data_offset": 2048, 00:09:34.444 "data_size": 63488 00:09:34.444 }, 00:09:34.444 { 00:09:34.444 "name": null, 00:09:34.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.444 "is_configured": false, 00:09:34.444 "data_offset": 2048, 00:09:34.444 "data_size": 63488 00:09:34.444 }, 00:09:34.444 { 00:09:34.444 "name": null, 00:09:34.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.444 "is_configured": false, 00:09:34.444 "data_offset": 2048, 00:09:34.444 "data_size": 63488 00:09:34.444 } 00:09:34.444 ] 00:09:34.444 }' 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.444 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 [2024-12-14 12:35:34.552173] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.014 [2024-12-14 12:35:34.552290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.014 [2024-12-14 12:35:34.552352] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:35.014 [2024-12-14 12:35:34.552390] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.014 [2024-12-14 12:35:34.552884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.014 [2024-12-14 12:35:34.552946] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.014 [2024-12-14 12:35:34.553082] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:35.014 [2024-12-14 12:35:34.553148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.014 pt2 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 [2024-12-14 12:35:34.560153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.014 "name": "raid_bdev1", 00:09:35.014 "uuid": "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca", 00:09:35.014 "strip_size_kb": 64, 00:09:35.014 "state": "configuring", 00:09:35.014 "raid_level": "concat", 00:09:35.014 "superblock": true, 00:09:35.014 "num_base_bdevs": 3, 00:09:35.014 "num_base_bdevs_discovered": 1, 00:09:35.014 "num_base_bdevs_operational": 3, 00:09:35.014 "base_bdevs_list": [ 00:09:35.014 { 00:09:35.014 "name": "pt1", 00:09:35.014 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.014 "is_configured": true, 00:09:35.014 "data_offset": 2048, 00:09:35.014 "data_size": 63488 00:09:35.014 }, 00:09:35.014 { 00:09:35.014 "name": null, 00:09:35.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.014 "is_configured": false, 00:09:35.014 "data_offset": 0, 00:09:35.014 "data_size": 63488 00:09:35.014 }, 00:09:35.014 { 00:09:35.014 "name": null, 00:09:35.014 
"uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.014 "is_configured": false, 00:09:35.014 "data_offset": 2048, 00:09:35.014 "data_size": 63488 00:09:35.014 } 00:09:35.014 ] 00:09:35.014 }' 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.014 12:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.274 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:35.274 12:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:35.274 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:35.274 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.274 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.274 [2024-12-14 12:35:35.007436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:35.274 [2024-12-14 12:35:35.007525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.274 [2024-12-14 12:35:35.007546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:35.274 [2024-12-14 12:35:35.007557] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.274 [2024-12-14 12:35:35.008062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.274 [2024-12-14 12:35:35.008086] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:35.274 [2024-12-14 12:35:35.008176] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:35.274 [2024-12-14 12:35:35.008201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:35.535 pt2 00:09:35.535 12:35:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.535 [2024-12-14 12:35:35.015398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.535 [2024-12-14 12:35:35.015460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.535 [2024-12-14 12:35:35.015479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:35.535 [2024-12-14 12:35:35.015491] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.535 [2024-12-14 12:35:35.015926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.535 [2024-12-14 12:35:35.015947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.535 [2024-12-14 12:35:35.016015] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:35.535 [2024-12-14 12:35:35.016038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:35.535 [2024-12-14 12:35:35.016204] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:35.535 [2024-12-14 12:35:35.016217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:35.535 [2024-12-14 12:35:35.016476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:09:35.535 [2024-12-14 12:35:35.016626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:35.535 [2024-12-14 12:35:35.016640] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:35.535 [2024-12-14 12:35:35.016779] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.535 pt3 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.535 12:35:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.535 "name": "raid_bdev1", 00:09:35.535 "uuid": "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca", 00:09:35.535 "strip_size_kb": 64, 00:09:35.535 "state": "online", 00:09:35.535 "raid_level": "concat", 00:09:35.535 "superblock": true, 00:09:35.535 "num_base_bdevs": 3, 00:09:35.535 "num_base_bdevs_discovered": 3, 00:09:35.535 "num_base_bdevs_operational": 3, 00:09:35.535 "base_bdevs_list": [ 00:09:35.535 { 00:09:35.535 "name": "pt1", 00:09:35.535 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.535 "is_configured": true, 00:09:35.535 "data_offset": 2048, 00:09:35.535 "data_size": 63488 00:09:35.535 }, 00:09:35.535 { 00:09:35.535 "name": "pt2", 00:09:35.535 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.535 "is_configured": true, 00:09:35.535 "data_offset": 2048, 00:09:35.535 "data_size": 63488 00:09:35.535 }, 00:09:35.535 { 00:09:35.535 "name": "pt3", 00:09:35.535 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.535 "is_configured": true, 00:09:35.535 "data_offset": 2048, 00:09:35.535 "data_size": 63488 00:09:35.535 } 00:09:35.535 ] 00:09:35.535 }' 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.535 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.794 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.794 [2024-12-14 12:35:35.447078] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.795 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.795 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:35.795 "name": "raid_bdev1", 00:09:35.795 "aliases": [ 00:09:35.795 "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca" 00:09:35.795 ], 00:09:35.795 "product_name": "Raid Volume", 00:09:35.795 "block_size": 512, 00:09:35.795 "num_blocks": 190464, 00:09:35.795 "uuid": "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca", 00:09:35.795 "assigned_rate_limits": { 00:09:35.795 "rw_ios_per_sec": 0, 00:09:35.795 "rw_mbytes_per_sec": 0, 00:09:35.795 "r_mbytes_per_sec": 0, 00:09:35.795 "w_mbytes_per_sec": 0 00:09:35.795 }, 00:09:35.795 "claimed": false, 00:09:35.795 "zoned": false, 00:09:35.795 "supported_io_types": { 00:09:35.795 "read": true, 00:09:35.795 "write": true, 00:09:35.795 "unmap": true, 00:09:35.795 "flush": true, 00:09:35.795 "reset": true, 00:09:35.795 "nvme_admin": false, 00:09:35.795 "nvme_io": false, 
00:09:35.795 "nvme_io_md": false, 00:09:35.795 "write_zeroes": true, 00:09:35.795 "zcopy": false, 00:09:35.795 "get_zone_info": false, 00:09:35.795 "zone_management": false, 00:09:35.795 "zone_append": false, 00:09:35.795 "compare": false, 00:09:35.795 "compare_and_write": false, 00:09:35.795 "abort": false, 00:09:35.795 "seek_hole": false, 00:09:35.795 "seek_data": false, 00:09:35.795 "copy": false, 00:09:35.795 "nvme_iov_md": false 00:09:35.795 }, 00:09:35.795 "memory_domains": [ 00:09:35.795 { 00:09:35.795 "dma_device_id": "system", 00:09:35.795 "dma_device_type": 1 00:09:35.795 }, 00:09:35.795 { 00:09:35.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.795 "dma_device_type": 2 00:09:35.795 }, 00:09:35.795 { 00:09:35.795 "dma_device_id": "system", 00:09:35.795 "dma_device_type": 1 00:09:35.795 }, 00:09:35.795 { 00:09:35.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.795 "dma_device_type": 2 00:09:35.795 }, 00:09:35.795 { 00:09:35.795 "dma_device_id": "system", 00:09:35.795 "dma_device_type": 1 00:09:35.795 }, 00:09:35.795 { 00:09:35.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.795 "dma_device_type": 2 00:09:35.795 } 00:09:35.795 ], 00:09:35.795 "driver_specific": { 00:09:35.795 "raid": { 00:09:35.795 "uuid": "cb84efea-34bd-4d3c-9c9f-e25f694bb7ca", 00:09:35.795 "strip_size_kb": 64, 00:09:35.795 "state": "online", 00:09:35.795 "raid_level": "concat", 00:09:35.795 "superblock": true, 00:09:35.795 "num_base_bdevs": 3, 00:09:35.795 "num_base_bdevs_discovered": 3, 00:09:35.795 "num_base_bdevs_operational": 3, 00:09:35.795 "base_bdevs_list": [ 00:09:35.795 { 00:09:35.795 "name": "pt1", 00:09:35.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:35.795 "is_configured": true, 00:09:35.795 "data_offset": 2048, 00:09:35.795 "data_size": 63488 00:09:35.795 }, 00:09:35.795 { 00:09:35.795 "name": "pt2", 00:09:35.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.795 "is_configured": true, 00:09:35.795 "data_offset": 2048, 00:09:35.795 
"data_size": 63488 00:09:35.795 }, 00:09:35.795 { 00:09:35.795 "name": "pt3", 00:09:35.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.795 "is_configured": true, 00:09:35.795 "data_offset": 2048, 00:09:35.795 "data_size": 63488 00:09:35.795 } 00:09:35.795 ] 00:09:35.795 } 00:09:35.795 } 00:09:35.795 }' 00:09:35.795 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:36.054 pt2 00:09:36.054 pt3' 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.054 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.055 [2024-12-14 12:35:35.726476] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' cb84efea-34bd-4d3c-9c9f-e25f694bb7ca '!=' cb84efea-34bd-4d3c-9c9f-e25f694bb7ca ']' 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68647 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68647 ']' 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68647 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.055 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68647 00:09:36.314 killing process with pid 68647 00:09:36.314 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.314 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.314 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68647' 00:09:36.314 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 68647 00:09:36.314 [2024-12-14 12:35:35.813888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:09:36.314 [2024-12-14 12:35:35.814000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.314 12:35:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68647 00:09:36.314 [2024-12-14 12:35:35.814109] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.314 [2024-12-14 12:35:35.814125] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:36.574 [2024-12-14 12:35:36.126222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.953 12:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:37.953 00:09:37.953 real 0m5.303s 00:09:37.953 user 0m7.640s 00:09:37.953 sys 0m0.887s 00:09:37.953 12:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.953 12:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 ************************************ 00:09:37.953 END TEST raid_superblock_test 00:09:37.953 ************************************ 00:09:37.953 12:35:37 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:37.953 12:35:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:37.953 12:35:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.953 12:35:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.953 ************************************ 00:09:37.953 START TEST raid_read_error_test 00:09:37.953 ************************************ 00:09:37.953 12:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:37.953 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:37.953 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:37.954 12:35:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kxwPHQkqcr 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68906 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68906 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68906 ']' 00:09:37.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.954 12:35:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.954 [2024-12-14 12:35:37.429122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:37.954 [2024-12-14 12:35:37.429258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68906 ] 00:09:37.954 [2024-12-14 12:35:37.589636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.219 [2024-12-14 12:35:37.707747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.219 [2024-12-14 12:35:37.911315] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.219 [2024-12-14 12:35:37.911479] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.795 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.795 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 BaseBdev1_malloc 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 true 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 [2024-12-14 12:35:38.332773] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:38.796 [2024-12-14 12:35:38.332830] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.796 [2024-12-14 12:35:38.332851] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:38.796 [2024-12-14 12:35:38.332861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.796 [2024-12-14 12:35:38.334916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.796 [2024-12-14 12:35:38.335018] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:38.796 BaseBdev1 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 BaseBdev2_malloc 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 true 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 [2024-12-14 12:35:38.399713] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.796 [2024-12-14 12:35:38.399763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.796 [2024-12-14 12:35:38.399796] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:38.796 [2024-12-14 12:35:38.399806] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.796 [2024-12-14 12:35:38.401902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.796 [2024-12-14 12:35:38.401940] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.796 BaseBdev2 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 BaseBdev3_malloc 00:09:38.796 12:35:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 true 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 [2024-12-14 12:35:38.480712] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:38.796 [2024-12-14 12:35:38.480761] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.796 [2024-12-14 12:35:38.480779] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:38.796 [2024-12-14 12:35:38.480788] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.796 [2024-12-14 12:35:38.482900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.796 [2024-12-14 12:35:38.482994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:38.796 BaseBdev3 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 [2024-12-14 12:35:38.492765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.796 [2024-12-14 12:35:38.494544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.796 [2024-12-14 12:35:38.494616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.796 [2024-12-14 12:35:38.494835] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:38.796 [2024-12-14 12:35:38.494848] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:38.796 [2024-12-14 12:35:38.495083] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:38.796 [2024-12-14 12:35:38.495242] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:38.796 [2024-12-14 12:35:38.495256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:38.796 [2024-12-14 12:35:38.495429] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.796 12:35:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.796 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.055 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.055 "name": "raid_bdev1", 00:09:39.055 "uuid": "4b364420-362d-41d3-ae93-ccefeed7a289", 00:09:39.055 "strip_size_kb": 64, 00:09:39.055 "state": "online", 00:09:39.055 "raid_level": "concat", 00:09:39.055 "superblock": true, 00:09:39.055 "num_base_bdevs": 3, 00:09:39.055 "num_base_bdevs_discovered": 3, 00:09:39.055 "num_base_bdevs_operational": 3, 00:09:39.055 "base_bdevs_list": [ 00:09:39.055 { 00:09:39.055 "name": "BaseBdev1", 00:09:39.055 "uuid": "9b3b20df-9e45-5780-8ae8-1d29d57a63ee", 00:09:39.055 "is_configured": true, 00:09:39.055 "data_offset": 2048, 00:09:39.055 "data_size": 63488 00:09:39.055 }, 00:09:39.055 { 00:09:39.055 "name": "BaseBdev2", 00:09:39.055 "uuid": "08a2337a-373d-5866-8ff3-713574f059f7", 00:09:39.055 "is_configured": true, 00:09:39.055 "data_offset": 2048, 00:09:39.055 "data_size": 63488 
00:09:39.055 }, 00:09:39.055 { 00:09:39.055 "name": "BaseBdev3", 00:09:39.055 "uuid": "a0bced66-c723-5cb2-87ee-aa34639859c1", 00:09:39.055 "is_configured": true, 00:09:39.055 "data_offset": 2048, 00:09:39.055 "data_size": 63488 00:09:39.055 } 00:09:39.055 ] 00:09:39.055 }' 00:09:39.055 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.055 12:35:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.315 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:39.315 12:35:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:39.574 [2024-12-14 12:35:39.077135] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.513 12:35:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.513 12:35:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.513 12:35:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.513 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.513 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.513 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.513 12:35:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.513 "name": "raid_bdev1", 00:09:40.513 "uuid": "4b364420-362d-41d3-ae93-ccefeed7a289", 00:09:40.513 "strip_size_kb": 64, 00:09:40.513 "state": "online", 00:09:40.513 "raid_level": "concat", 00:09:40.513 "superblock": true, 00:09:40.513 "num_base_bdevs": 3, 00:09:40.513 "num_base_bdevs_discovered": 3, 00:09:40.513 "num_base_bdevs_operational": 3, 00:09:40.513 "base_bdevs_list": [ 00:09:40.513 { 00:09:40.513 "name": "BaseBdev1", 00:09:40.513 "uuid": "9b3b20df-9e45-5780-8ae8-1d29d57a63ee", 00:09:40.513 "is_configured": true, 00:09:40.513 "data_offset": 2048, 00:09:40.513 "data_size": 63488 
00:09:40.513 }, 00:09:40.513 { 00:09:40.513 "name": "BaseBdev2", 00:09:40.513 "uuid": "08a2337a-373d-5866-8ff3-713574f059f7", 00:09:40.513 "is_configured": true, 00:09:40.513 "data_offset": 2048, 00:09:40.513 "data_size": 63488 00:09:40.513 }, 00:09:40.513 { 00:09:40.513 "name": "BaseBdev3", 00:09:40.513 "uuid": "a0bced66-c723-5cb2-87ee-aa34639859c1", 00:09:40.513 "is_configured": true, 00:09:40.513 "data_offset": 2048, 00:09:40.513 "data_size": 63488 00:09:40.513 } 00:09:40.513 ] 00:09:40.513 }' 00:09:40.513 12:35:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.513 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.773 12:35:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:40.773 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.773 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.773 [2024-12-14 12:35:40.457306] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:40.773 [2024-12-14 12:35:40.457404] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.774 [2024-12-14 12:35:40.460435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.774 [2024-12-14 12:35:40.460520] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:40.774 [2024-12-14 12:35:40.460578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.774 [2024-12-14 12:35:40.460634] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:40.774 { 00:09:40.774 "results": [ 00:09:40.774 { 00:09:40.774 "job": "raid_bdev1", 00:09:40.774 "core_mask": "0x1", 00:09:40.774 "workload": "randrw", 00:09:40.774 "percentage": 50, 
00:09:40.774 "status": "finished", 00:09:40.774 "queue_depth": 1, 00:09:40.774 "io_size": 131072, 00:09:40.774 "runtime": 1.381183, 00:09:40.774 "iops": 15463.5555172631, 00:09:40.774 "mibps": 1932.9444396578874, 00:09:40.774 "io_failed": 1, 00:09:40.774 "io_timeout": 0, 00:09:40.774 "avg_latency_us": 89.55561410047532, 00:09:40.774 "min_latency_us": 26.494323144104804, 00:09:40.774 "max_latency_us": 1452.380786026201 00:09:40.774 } 00:09:40.774 ], 00:09:40.774 "core_count": 1 00:09:40.774 } 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68906 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68906 ']' 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68906 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68906 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.774 killing process with pid 68906 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68906' 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68906 00:09:40.774 [2024-12-14 12:35:40.506699] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.774 12:35:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68906 00:09:41.034 [2024-12-14 
12:35:40.737571] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kxwPHQkqcr 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:42.415 ************************************ 00:09:42.415 END TEST raid_read_error_test 00:09:42.415 ************************************ 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:42.415 00:09:42.415 real 0m4.607s 00:09:42.415 user 0m5.518s 00:09:42.415 sys 0m0.559s 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:42.415 12:35:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.415 12:35:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:42.415 12:35:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:42.415 12:35:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:42.415 12:35:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:42.415 ************************************ 00:09:42.415 START TEST raid_write_error_test 00:09:42.415 ************************************ 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:42.415 12:35:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:42.415 12:35:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.cROlohGsDe 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69046 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69046 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69046 ']' 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:42.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:42.415 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.415 [2024-12-14 12:35:42.109565] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:42.415 [2024-12-14 12:35:42.109770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69046 ] 00:09:42.675 [2024-12-14 12:35:42.282733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.675 [2024-12-14 12:35:42.401514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.935 [2024-12-14 12:35:42.601004] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.935 [2024-12-14 12:35:42.601079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.505 BaseBdev1_malloc 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.505 true 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.505 12:35:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.505 [2024-12-14 12:35:43.000688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:43.505 [2024-12-14 12:35:43.000785] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.505 [2024-12-14 12:35:43.000824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:43.505 [2024-12-14 12:35:43.000835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.505 [2024-12-14 12:35:43.003051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.505 [2024-12-14 12:35:43.003103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:43.505 BaseBdev1 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:43.505 BaseBdev2_malloc 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.505 true 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.505 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.505 [2024-12-14 12:35:43.068296] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:43.505 [2024-12-14 12:35:43.068367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.505 [2024-12-14 12:35:43.068383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:43.505 [2024-12-14 12:35:43.068393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.505 [2024-12-14 12:35:43.070514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.506 [2024-12-14 12:35:43.070554] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:43.506 BaseBdev2 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:43.506 12:35:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.506 BaseBdev3_malloc 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.506 true 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.506 [2024-12-14 12:35:43.145332] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:43.506 [2024-12-14 12:35:43.145384] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:43.506 [2024-12-14 12:35:43.145401] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:43.506 [2024-12-14 12:35:43.145411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:43.506 [2024-12-14 12:35:43.147647] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:43.506 [2024-12-14 12:35:43.147748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:43.506 BaseBdev3 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.506 [2024-12-14 12:35:43.157393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:43.506 [2024-12-14 12:35:43.159281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:43.506 [2024-12-14 12:35:43.159371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:43.506 [2024-12-14 12:35:43.159582] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:43.506 [2024-12-14 12:35:43.159606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:43.506 [2024-12-14 12:35:43.159848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:43.506 [2024-12-14 12:35:43.159994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:43.506 [2024-12-14 12:35:43.160022] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:43.506 [2024-12-14 12:35:43.160185] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.506 "name": "raid_bdev1", 00:09:43.506 "uuid": "759c9b89-3daa-4201-bec4-7c57b8f8105c", 00:09:43.506 "strip_size_kb": 64, 00:09:43.506 "state": "online", 00:09:43.506 "raid_level": "concat", 00:09:43.506 "superblock": true, 00:09:43.506 "num_base_bdevs": 3, 00:09:43.506 "num_base_bdevs_discovered": 3, 00:09:43.506 "num_base_bdevs_operational": 3, 00:09:43.506 "base_bdevs_list": [ 00:09:43.506 { 00:09:43.506 
"name": "BaseBdev1", 00:09:43.506 "uuid": "4d747c6b-fcbb-593f-9269-9d226423b35a", 00:09:43.506 "is_configured": true, 00:09:43.506 "data_offset": 2048, 00:09:43.506 "data_size": 63488 00:09:43.506 }, 00:09:43.506 { 00:09:43.506 "name": "BaseBdev2", 00:09:43.506 "uuid": "f5e8bc4c-3b23-50df-b9cd-faee37266bb0", 00:09:43.506 "is_configured": true, 00:09:43.506 "data_offset": 2048, 00:09:43.506 "data_size": 63488 00:09:43.506 }, 00:09:43.506 { 00:09:43.506 "name": "BaseBdev3", 00:09:43.506 "uuid": "9e37b704-b934-51fd-ae45-578ae1d91359", 00:09:43.506 "is_configured": true, 00:09:43.506 "data_offset": 2048, 00:09:43.506 "data_size": 63488 00:09:43.506 } 00:09:43.506 ] 00:09:43.506 }' 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.506 12:35:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.075 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:44.075 12:35:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:44.075 [2024-12-14 12:35:43.665787] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.015 "name": "raid_bdev1", 00:09:45.015 "uuid": "759c9b89-3daa-4201-bec4-7c57b8f8105c", 00:09:45.015 "strip_size_kb": 64, 00:09:45.015 "state": "online", 
00:09:45.015 "raid_level": "concat", 00:09:45.015 "superblock": true, 00:09:45.015 "num_base_bdevs": 3, 00:09:45.015 "num_base_bdevs_discovered": 3, 00:09:45.015 "num_base_bdevs_operational": 3, 00:09:45.015 "base_bdevs_list": [ 00:09:45.015 { 00:09:45.015 "name": "BaseBdev1", 00:09:45.015 "uuid": "4d747c6b-fcbb-593f-9269-9d226423b35a", 00:09:45.015 "is_configured": true, 00:09:45.015 "data_offset": 2048, 00:09:45.015 "data_size": 63488 00:09:45.015 }, 00:09:45.015 { 00:09:45.015 "name": "BaseBdev2", 00:09:45.015 "uuid": "f5e8bc4c-3b23-50df-b9cd-faee37266bb0", 00:09:45.015 "is_configured": true, 00:09:45.015 "data_offset": 2048, 00:09:45.015 "data_size": 63488 00:09:45.015 }, 00:09:45.015 { 00:09:45.015 "name": "BaseBdev3", 00:09:45.015 "uuid": "9e37b704-b934-51fd-ae45-578ae1d91359", 00:09:45.015 "is_configured": true, 00:09:45.015 "data_offset": 2048, 00:09:45.015 "data_size": 63488 00:09:45.015 } 00:09:45.015 ] 00:09:45.015 }' 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.015 12:35:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.585 12:35:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:45.585 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.585 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.585 [2024-12-14 12:35:45.041949] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:45.585 [2024-12-14 12:35:45.041982] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:45.585 [2024-12-14 12:35:45.045066] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.585 [2024-12-14 12:35:45.045116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.585 [2024-12-14 12:35:45.045158] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.585 [2024-12-14 12:35:45.045170] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:45.585 { 00:09:45.585 "results": [ 00:09:45.585 { 00:09:45.585 "job": "raid_bdev1", 00:09:45.585 "core_mask": "0x1", 00:09:45.586 "workload": "randrw", 00:09:45.586 "percentage": 50, 00:09:45.586 "status": "finished", 00:09:45.586 "queue_depth": 1, 00:09:45.586 "io_size": 131072, 00:09:45.586 "runtime": 1.377013, 00:09:45.586 "iops": 15174.148682692175, 00:09:45.586 "mibps": 1896.768585336522, 00:09:45.586 "io_failed": 1, 00:09:45.586 "io_timeout": 0, 00:09:45.586 "avg_latency_us": 91.22827928873791, 00:09:45.586 "min_latency_us": 27.053275109170304, 00:09:45.586 "max_latency_us": 1616.9362445414847 00:09:45.586 } 00:09:45.586 ], 00:09:45.586 "core_count": 1 00:09:45.586 } 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69046 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69046 ']' 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69046 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69046 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.586 killing process with pid 69046 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.586 12:35:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69046' 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69046 00:09:45.586 [2024-12-14 12:35:45.091431] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.586 12:35:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69046 00:09:45.845 [2024-12-14 12:35:45.329690] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:47.235 12:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.cROlohGsDe 00:09:47.235 12:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:47.235 12:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:47.236 12:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:47.236 ************************************ 00:09:47.236 END TEST raid_write_error_test 00:09:47.236 ************************************ 00:09:47.236 12:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:47.236 12:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:47.236 12:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:47.236 12:35:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:47.236 00:09:47.236 real 0m4.547s 00:09:47.236 user 0m5.385s 00:09:47.236 sys 0m0.523s 00:09:47.236 12:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:47.236 12:35:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.236 12:35:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:47.236 12:35:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:47.236 12:35:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:47.236 12:35:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:47.236 12:35:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.236 ************************************ 00:09:47.236 START TEST raid_state_function_test 00:09:47.236 ************************************ 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:47.236 Process raid pid: 69191 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69191 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69191' 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69191 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69191 ']' 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.236 12:35:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.236 [2024-12-14 12:35:46.721393] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:47.236 [2024-12-14 12:35:46.721531] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.236 [2024-12-14 12:35:46.889209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.521 [2024-12-14 12:35:46.999979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.521 [2024-12-14 12:35:47.206774] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.521 [2024-12-14 12:35:47.206917] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.090 [2024-12-14 12:35:47.567431] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.090 [2024-12-14 12:35:47.567483] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.090 [2024-12-14 12:35:47.567493] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.090 [2024-12-14 12:35:47.567519] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.090 [2024-12-14 12:35:47.567526] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.090 [2024-12-14 12:35:47.567534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.090 
12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.090 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.091 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.091 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.091 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.091 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.091 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.091 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.091 "name": "Existed_Raid", 00:09:48.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.091 "strip_size_kb": 0, 00:09:48.091 "state": "configuring", 00:09:48.091 "raid_level": "raid1", 00:09:48.091 "superblock": false, 00:09:48.091 "num_base_bdevs": 3, 00:09:48.091 "num_base_bdevs_discovered": 0, 00:09:48.091 "num_base_bdevs_operational": 3, 00:09:48.091 "base_bdevs_list": [ 00:09:48.091 { 00:09:48.091 "name": "BaseBdev1", 00:09:48.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.091 "is_configured": false, 00:09:48.091 "data_offset": 0, 00:09:48.091 "data_size": 0 00:09:48.091 }, 00:09:48.091 { 00:09:48.091 "name": "BaseBdev2", 00:09:48.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.091 "is_configured": false, 00:09:48.091 "data_offset": 0, 00:09:48.091 "data_size": 0 00:09:48.091 }, 00:09:48.091 { 00:09:48.091 "name": "BaseBdev3", 00:09:48.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.091 "is_configured": false, 00:09:48.091 "data_offset": 0, 00:09:48.091 "data_size": 0 00:09:48.091 } 00:09:48.091 ] 00:09:48.091 }' 00:09:48.091 12:35:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.091 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.351 [2024-12-14 12:35:47.982680] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.351 [2024-12-14 12:35:47.982797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.351 [2024-12-14 12:35:47.994665] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.351 [2024-12-14 12:35:47.994770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.351 [2024-12-14 12:35:47.994798] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.351 [2024-12-14 12:35:47.994821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.351 [2024-12-14 12:35:47.994839] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.351 [2024-12-14 12:35:47.994861] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.351 12:35:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.351 [2024-12-14 12:35:48.041496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.351 BaseBdev1 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.351 [ 00:09:48.351 { 00:09:48.351 "name": "BaseBdev1", 00:09:48.351 "aliases": [ 00:09:48.351 "10eaf38d-1d23-428f-ae6c-e78741d208ae" 00:09:48.351 ], 00:09:48.351 "product_name": "Malloc disk", 00:09:48.351 "block_size": 512, 00:09:48.351 "num_blocks": 65536, 00:09:48.351 "uuid": "10eaf38d-1d23-428f-ae6c-e78741d208ae", 00:09:48.351 "assigned_rate_limits": { 00:09:48.351 "rw_ios_per_sec": 0, 00:09:48.351 "rw_mbytes_per_sec": 0, 00:09:48.351 "r_mbytes_per_sec": 0, 00:09:48.351 "w_mbytes_per_sec": 0 00:09:48.351 }, 00:09:48.351 "claimed": true, 00:09:48.351 "claim_type": "exclusive_write", 00:09:48.351 "zoned": false, 00:09:48.351 "supported_io_types": { 00:09:48.351 "read": true, 00:09:48.351 "write": true, 00:09:48.351 "unmap": true, 00:09:48.351 "flush": true, 00:09:48.351 "reset": true, 00:09:48.351 "nvme_admin": false, 00:09:48.351 "nvme_io": false, 00:09:48.351 "nvme_io_md": false, 00:09:48.351 "write_zeroes": true, 00:09:48.351 "zcopy": true, 00:09:48.351 "get_zone_info": false, 00:09:48.351 "zone_management": false, 00:09:48.351 "zone_append": false, 00:09:48.351 "compare": false, 00:09:48.351 "compare_and_write": false, 00:09:48.351 "abort": true, 00:09:48.351 "seek_hole": false, 00:09:48.351 "seek_data": false, 00:09:48.351 "copy": true, 00:09:48.351 "nvme_iov_md": false 00:09:48.351 }, 00:09:48.351 "memory_domains": [ 00:09:48.351 { 00:09:48.351 "dma_device_id": "system", 00:09:48.351 "dma_device_type": 1 00:09:48.351 }, 00:09:48.351 { 00:09:48.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.351 "dma_device_type": 2 00:09:48.351 } 00:09:48.351 ], 00:09:48.351 "driver_specific": {} 00:09:48.351 } 00:09:48.351 ] 00:09:48.351 12:35:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.351 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:48.611 "name": "Existed_Raid", 00:09:48.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.611 "strip_size_kb": 0, 00:09:48.611 "state": "configuring", 00:09:48.611 "raid_level": "raid1", 00:09:48.611 "superblock": false, 00:09:48.611 "num_base_bdevs": 3, 00:09:48.611 "num_base_bdevs_discovered": 1, 00:09:48.611 "num_base_bdevs_operational": 3, 00:09:48.611 "base_bdevs_list": [ 00:09:48.611 { 00:09:48.611 "name": "BaseBdev1", 00:09:48.611 "uuid": "10eaf38d-1d23-428f-ae6c-e78741d208ae", 00:09:48.611 "is_configured": true, 00:09:48.611 "data_offset": 0, 00:09:48.611 "data_size": 65536 00:09:48.611 }, 00:09:48.611 { 00:09:48.611 "name": "BaseBdev2", 00:09:48.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.611 "is_configured": false, 00:09:48.611 "data_offset": 0, 00:09:48.611 "data_size": 0 00:09:48.611 }, 00:09:48.611 { 00:09:48.611 "name": "BaseBdev3", 00:09:48.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.611 "is_configured": false, 00:09:48.611 "data_offset": 0, 00:09:48.611 "data_size": 0 00:09:48.611 } 00:09:48.611 ] 00:09:48.611 }' 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.611 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.871 [2024-12-14 12:35:48.540686] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:48.871 [2024-12-14 12:35:48.540791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.871 [2024-12-14 12:35:48.552699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.871 [2024-12-14 12:35:48.554563] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:48.871 [2024-12-14 12:35:48.554611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:48.871 [2024-12-14 12:35:48.554622] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:48.871 [2024-12-14 12:35:48.554633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.871 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.872 "name": "Existed_Raid", 00:09:48.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.872 "strip_size_kb": 0, 00:09:48.872 "state": "configuring", 00:09:48.872 "raid_level": "raid1", 00:09:48.872 "superblock": false, 00:09:48.872 "num_base_bdevs": 3, 00:09:48.872 "num_base_bdevs_discovered": 1, 00:09:48.872 "num_base_bdevs_operational": 3, 00:09:48.872 "base_bdevs_list": [ 00:09:48.872 { 00:09:48.872 "name": "BaseBdev1", 00:09:48.872 "uuid": "10eaf38d-1d23-428f-ae6c-e78741d208ae", 00:09:48.872 "is_configured": true, 00:09:48.872 "data_offset": 0, 00:09:48.872 "data_size": 65536 00:09:48.872 }, 00:09:48.872 { 00:09:48.872 "name": "BaseBdev2", 00:09:48.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.872 
"is_configured": false, 00:09:48.872 "data_offset": 0, 00:09:48.872 "data_size": 0 00:09:48.872 }, 00:09:48.872 { 00:09:48.872 "name": "BaseBdev3", 00:09:48.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.872 "is_configured": false, 00:09:48.872 "data_offset": 0, 00:09:48.872 "data_size": 0 00:09:48.872 } 00:09:48.872 ] 00:09:48.872 }' 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.872 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.441 12:35:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:49.441 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.441 12:35:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.441 [2024-12-14 12:35:49.033217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.441 BaseBdev2 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.441 12:35:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.441 [ 00:09:49.441 { 00:09:49.441 "name": "BaseBdev2", 00:09:49.441 "aliases": [ 00:09:49.441 "f87fb42d-9d6e-468d-a158-a3724b48daca" 00:09:49.441 ], 00:09:49.441 "product_name": "Malloc disk", 00:09:49.441 "block_size": 512, 00:09:49.441 "num_blocks": 65536, 00:09:49.441 "uuid": "f87fb42d-9d6e-468d-a158-a3724b48daca", 00:09:49.441 "assigned_rate_limits": { 00:09:49.441 "rw_ios_per_sec": 0, 00:09:49.441 "rw_mbytes_per_sec": 0, 00:09:49.441 "r_mbytes_per_sec": 0, 00:09:49.441 "w_mbytes_per_sec": 0 00:09:49.441 }, 00:09:49.441 "claimed": true, 00:09:49.441 "claim_type": "exclusive_write", 00:09:49.441 "zoned": false, 00:09:49.441 "supported_io_types": { 00:09:49.441 "read": true, 00:09:49.441 "write": true, 00:09:49.441 "unmap": true, 00:09:49.441 "flush": true, 00:09:49.441 "reset": true, 00:09:49.441 "nvme_admin": false, 00:09:49.441 "nvme_io": false, 00:09:49.441 "nvme_io_md": false, 00:09:49.441 "write_zeroes": true, 00:09:49.441 "zcopy": true, 00:09:49.441 "get_zone_info": false, 00:09:49.441 "zone_management": false, 00:09:49.441 "zone_append": false, 00:09:49.441 "compare": false, 00:09:49.441 "compare_and_write": false, 00:09:49.441 "abort": true, 00:09:49.441 "seek_hole": false, 00:09:49.441 "seek_data": false, 00:09:49.441 "copy": true, 00:09:49.441 "nvme_iov_md": false 00:09:49.441 }, 00:09:49.441 
"memory_domains": [ 00:09:49.441 { 00:09:49.441 "dma_device_id": "system", 00:09:49.441 "dma_device_type": 1 00:09:49.441 }, 00:09:49.441 { 00:09:49.441 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.441 "dma_device_type": 2 00:09:49.441 } 00:09:49.441 ], 00:09:49.441 "driver_specific": {} 00:09:49.441 } 00:09:49.441 ] 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.441 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.442 "name": "Existed_Raid", 00:09:49.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.442 "strip_size_kb": 0, 00:09:49.442 "state": "configuring", 00:09:49.442 "raid_level": "raid1", 00:09:49.442 "superblock": false, 00:09:49.442 "num_base_bdevs": 3, 00:09:49.442 "num_base_bdevs_discovered": 2, 00:09:49.442 "num_base_bdevs_operational": 3, 00:09:49.442 "base_bdevs_list": [ 00:09:49.442 { 00:09:49.442 "name": "BaseBdev1", 00:09:49.442 "uuid": "10eaf38d-1d23-428f-ae6c-e78741d208ae", 00:09:49.442 "is_configured": true, 00:09:49.442 "data_offset": 0, 00:09:49.442 "data_size": 65536 00:09:49.442 }, 00:09:49.442 { 00:09:49.442 "name": "BaseBdev2", 00:09:49.442 "uuid": "f87fb42d-9d6e-468d-a158-a3724b48daca", 00:09:49.442 "is_configured": true, 00:09:49.442 "data_offset": 0, 00:09:49.442 "data_size": 65536 00:09:49.442 }, 00:09:49.442 { 00:09:49.442 "name": "BaseBdev3", 00:09:49.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.442 "is_configured": false, 00:09:49.442 "data_offset": 0, 00:09:49.442 "data_size": 0 00:09:49.442 } 00:09:49.442 ] 00:09:49.442 }' 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.442 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.012 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:50.012 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.012 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.012 [2024-12-14 12:35:49.528301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:50.012 [2024-12-14 12:35:49.528352] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:50.012 [2024-12-14 12:35:49.528365] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:50.012 [2024-12-14 12:35:49.528632] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:50.012 [2024-12-14 12:35:49.528799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:50.013 [2024-12-14 12:35:49.528808] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:50.013 [2024-12-14 12:35:49.529158] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:50.013 BaseBdev3 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.013 [ 00:09:50.013 { 00:09:50.013 "name": "BaseBdev3", 00:09:50.013 "aliases": [ 00:09:50.013 "90fbd058-47b5-4904-bda9-6d665419a45a" 00:09:50.013 ], 00:09:50.013 "product_name": "Malloc disk", 00:09:50.013 "block_size": 512, 00:09:50.013 "num_blocks": 65536, 00:09:50.013 "uuid": "90fbd058-47b5-4904-bda9-6d665419a45a", 00:09:50.013 "assigned_rate_limits": { 00:09:50.013 "rw_ios_per_sec": 0, 00:09:50.013 "rw_mbytes_per_sec": 0, 00:09:50.013 "r_mbytes_per_sec": 0, 00:09:50.013 "w_mbytes_per_sec": 0 00:09:50.013 }, 00:09:50.013 "claimed": true, 00:09:50.013 "claim_type": "exclusive_write", 00:09:50.013 "zoned": false, 00:09:50.013 "supported_io_types": { 00:09:50.013 "read": true, 00:09:50.013 "write": true, 00:09:50.013 "unmap": true, 00:09:50.013 "flush": true, 00:09:50.013 "reset": true, 00:09:50.013 "nvme_admin": false, 00:09:50.013 "nvme_io": false, 00:09:50.013 "nvme_io_md": false, 00:09:50.013 "write_zeroes": true, 00:09:50.013 "zcopy": true, 00:09:50.013 "get_zone_info": false, 00:09:50.013 "zone_management": false, 00:09:50.013 "zone_append": false, 00:09:50.013 "compare": false, 00:09:50.013 "compare_and_write": false, 00:09:50.013 "abort": true, 00:09:50.013 "seek_hole": false, 00:09:50.013 "seek_data": false, 00:09:50.013 
"copy": true, 00:09:50.013 "nvme_iov_md": false 00:09:50.013 }, 00:09:50.013 "memory_domains": [ 00:09:50.013 { 00:09:50.013 "dma_device_id": "system", 00:09:50.013 "dma_device_type": 1 00:09:50.013 }, 00:09:50.013 { 00:09:50.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.013 "dma_device_type": 2 00:09:50.013 } 00:09:50.013 ], 00:09:50.013 "driver_specific": {} 00:09:50.013 } 00:09:50.013 ] 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.013 12:35:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.013 "name": "Existed_Raid", 00:09:50.013 "uuid": "7f7159fe-be2f-4698-b6af-80fc9becfee6", 00:09:50.013 "strip_size_kb": 0, 00:09:50.013 "state": "online", 00:09:50.013 "raid_level": "raid1", 00:09:50.013 "superblock": false, 00:09:50.013 "num_base_bdevs": 3, 00:09:50.013 "num_base_bdevs_discovered": 3, 00:09:50.013 "num_base_bdevs_operational": 3, 00:09:50.013 "base_bdevs_list": [ 00:09:50.013 { 00:09:50.013 "name": "BaseBdev1", 00:09:50.013 "uuid": "10eaf38d-1d23-428f-ae6c-e78741d208ae", 00:09:50.013 "is_configured": true, 00:09:50.013 "data_offset": 0, 00:09:50.013 "data_size": 65536 00:09:50.013 }, 00:09:50.013 { 00:09:50.013 "name": "BaseBdev2", 00:09:50.013 "uuid": "f87fb42d-9d6e-468d-a158-a3724b48daca", 00:09:50.013 "is_configured": true, 00:09:50.013 "data_offset": 0, 00:09:50.013 "data_size": 65536 00:09:50.013 }, 00:09:50.013 { 00:09:50.013 "name": "BaseBdev3", 00:09:50.013 "uuid": "90fbd058-47b5-4904-bda9-6d665419a45a", 00:09:50.013 "is_configured": true, 00:09:50.013 "data_offset": 0, 00:09:50.013 "data_size": 65536 00:09:50.013 } 00:09:50.013 ] 00:09:50.013 }' 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.013 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.273 12:35:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.273 [2024-12-14 12:35:49.967920] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:50.273 12:35:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:50.533 "name": "Existed_Raid", 00:09:50.533 "aliases": [ 00:09:50.533 "7f7159fe-be2f-4698-b6af-80fc9becfee6" 00:09:50.533 ], 00:09:50.533 "product_name": "Raid Volume", 00:09:50.533 "block_size": 512, 00:09:50.533 "num_blocks": 65536, 00:09:50.533 "uuid": "7f7159fe-be2f-4698-b6af-80fc9becfee6", 00:09:50.533 "assigned_rate_limits": { 00:09:50.533 "rw_ios_per_sec": 0, 00:09:50.533 "rw_mbytes_per_sec": 0, 00:09:50.533 "r_mbytes_per_sec": 0, 00:09:50.533 "w_mbytes_per_sec": 0 00:09:50.533 }, 00:09:50.533 "claimed": false, 00:09:50.533 "zoned": false, 
00:09:50.533 "supported_io_types": { 00:09:50.533 "read": true, 00:09:50.533 "write": true, 00:09:50.533 "unmap": false, 00:09:50.533 "flush": false, 00:09:50.533 "reset": true, 00:09:50.533 "nvme_admin": false, 00:09:50.533 "nvme_io": false, 00:09:50.533 "nvme_io_md": false, 00:09:50.533 "write_zeroes": true, 00:09:50.533 "zcopy": false, 00:09:50.533 "get_zone_info": false, 00:09:50.533 "zone_management": false, 00:09:50.533 "zone_append": false, 00:09:50.533 "compare": false, 00:09:50.533 "compare_and_write": false, 00:09:50.533 "abort": false, 00:09:50.533 "seek_hole": false, 00:09:50.533 "seek_data": false, 00:09:50.533 "copy": false, 00:09:50.533 "nvme_iov_md": false 00:09:50.533 }, 00:09:50.533 "memory_domains": [ 00:09:50.533 { 00:09:50.533 "dma_device_id": "system", 00:09:50.533 "dma_device_type": 1 00:09:50.533 }, 00:09:50.533 { 00:09:50.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.533 "dma_device_type": 2 00:09:50.533 }, 00:09:50.533 { 00:09:50.533 "dma_device_id": "system", 00:09:50.533 "dma_device_type": 1 00:09:50.533 }, 00:09:50.533 { 00:09:50.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.533 "dma_device_type": 2 00:09:50.533 }, 00:09:50.533 { 00:09:50.533 "dma_device_id": "system", 00:09:50.533 "dma_device_type": 1 00:09:50.533 }, 00:09:50.533 { 00:09:50.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.533 "dma_device_type": 2 00:09:50.533 } 00:09:50.533 ], 00:09:50.533 "driver_specific": { 00:09:50.533 "raid": { 00:09:50.533 "uuid": "7f7159fe-be2f-4698-b6af-80fc9becfee6", 00:09:50.533 "strip_size_kb": 0, 00:09:50.533 "state": "online", 00:09:50.533 "raid_level": "raid1", 00:09:50.533 "superblock": false, 00:09:50.533 "num_base_bdevs": 3, 00:09:50.533 "num_base_bdevs_discovered": 3, 00:09:50.533 "num_base_bdevs_operational": 3, 00:09:50.533 "base_bdevs_list": [ 00:09:50.533 { 00:09:50.533 "name": "BaseBdev1", 00:09:50.533 "uuid": "10eaf38d-1d23-428f-ae6c-e78741d208ae", 00:09:50.533 "is_configured": true, 00:09:50.533 
"data_offset": 0, 00:09:50.533 "data_size": 65536 00:09:50.533 }, 00:09:50.533 { 00:09:50.533 "name": "BaseBdev2", 00:09:50.533 "uuid": "f87fb42d-9d6e-468d-a158-a3724b48daca", 00:09:50.533 "is_configured": true, 00:09:50.533 "data_offset": 0, 00:09:50.533 "data_size": 65536 00:09:50.533 }, 00:09:50.533 { 00:09:50.533 "name": "BaseBdev3", 00:09:50.533 "uuid": "90fbd058-47b5-4904-bda9-6d665419a45a", 00:09:50.533 "is_configured": true, 00:09:50.533 "data_offset": 0, 00:09:50.533 "data_size": 65536 00:09:50.533 } 00:09:50.533 ] 00:09:50.533 } 00:09:50.533 } 00:09:50.533 }' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:50.533 BaseBdev2 00:09:50.533 BaseBdev3' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.533 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.534 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.534 [2024-12-14 12:35:50.223252] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.794 "name": "Existed_Raid", 00:09:50.794 "uuid": "7f7159fe-be2f-4698-b6af-80fc9becfee6", 00:09:50.794 "strip_size_kb": 0, 00:09:50.794 "state": "online", 00:09:50.794 "raid_level": "raid1", 00:09:50.794 "superblock": false, 00:09:50.794 "num_base_bdevs": 3, 00:09:50.794 "num_base_bdevs_discovered": 2, 00:09:50.794 "num_base_bdevs_operational": 2, 00:09:50.794 "base_bdevs_list": [ 00:09:50.794 { 00:09:50.794 "name": null, 00:09:50.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.794 "is_configured": false, 00:09:50.794 "data_offset": 0, 00:09:50.794 "data_size": 65536 00:09:50.794 }, 00:09:50.794 { 00:09:50.794 "name": "BaseBdev2", 00:09:50.794 "uuid": "f87fb42d-9d6e-468d-a158-a3724b48daca", 00:09:50.794 "is_configured": true, 00:09:50.794 "data_offset": 0, 00:09:50.794 "data_size": 65536 00:09:50.794 }, 00:09:50.794 { 00:09:50.794 "name": "BaseBdev3", 00:09:50.794 "uuid": "90fbd058-47b5-4904-bda9-6d665419a45a", 00:09:50.794 "is_configured": true, 00:09:50.794 "data_offset": 0, 00:09:50.794 "data_size": 65536 00:09:50.794 } 00:09:50.794 ] 
00:09:50.794 }' 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.794 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.053 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.318 [2024-12-14 12:35:50.790314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.318 12:35:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.318 12:35:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.318 [2024-12-14 12:35:50.953083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.318 [2024-12-14 12:35:50.953190] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:51.318 [2024-12-14 12:35:51.047768] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:51.318 [2024-12-14 12:35:51.047921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:51.318 [2024-12-14 12:35:51.047939] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:51.318 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.318 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:51.318 12:35:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:51.577 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.577 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 BaseBdev2 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.578 
12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 [ 00:09:51.578 { 00:09:51.578 "name": "BaseBdev2", 00:09:51.578 "aliases": [ 00:09:51.578 "3dc50f40-e0f7-4a5b-8938-cc270304c1d6" 00:09:51.578 ], 00:09:51.578 "product_name": "Malloc disk", 00:09:51.578 "block_size": 512, 00:09:51.578 "num_blocks": 65536, 00:09:51.578 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:51.578 "assigned_rate_limits": { 00:09:51.578 "rw_ios_per_sec": 0, 00:09:51.578 "rw_mbytes_per_sec": 0, 00:09:51.578 "r_mbytes_per_sec": 0, 00:09:51.578 "w_mbytes_per_sec": 0 00:09:51.578 }, 00:09:51.578 "claimed": false, 00:09:51.578 "zoned": false, 00:09:51.578 "supported_io_types": { 00:09:51.578 "read": true, 00:09:51.578 "write": true, 00:09:51.578 "unmap": true, 00:09:51.578 "flush": true, 00:09:51.578 "reset": true, 00:09:51.578 "nvme_admin": false, 00:09:51.578 "nvme_io": false, 00:09:51.578 "nvme_io_md": false, 00:09:51.578 "write_zeroes": true, 
00:09:51.578 "zcopy": true, 00:09:51.578 "get_zone_info": false, 00:09:51.578 "zone_management": false, 00:09:51.578 "zone_append": false, 00:09:51.578 "compare": false, 00:09:51.578 "compare_and_write": false, 00:09:51.578 "abort": true, 00:09:51.578 "seek_hole": false, 00:09:51.578 "seek_data": false, 00:09:51.578 "copy": true, 00:09:51.578 "nvme_iov_md": false 00:09:51.578 }, 00:09:51.578 "memory_domains": [ 00:09:51.578 { 00:09:51.578 "dma_device_id": "system", 00:09:51.578 "dma_device_type": 1 00:09:51.578 }, 00:09:51.578 { 00:09:51.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.578 "dma_device_type": 2 00:09:51.578 } 00:09:51.578 ], 00:09:51.578 "driver_specific": {} 00:09:51.578 } 00:09:51.578 ] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 BaseBdev3 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.578 12:35:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 [ 00:09:51.578 { 00:09:51.578 "name": "BaseBdev3", 00:09:51.578 "aliases": [ 00:09:51.578 "2d2385c5-770e-40c1-b884-8a6a97d1f90a" 00:09:51.578 ], 00:09:51.578 "product_name": "Malloc disk", 00:09:51.578 "block_size": 512, 00:09:51.578 "num_blocks": 65536, 00:09:51.578 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:51.578 "assigned_rate_limits": { 00:09:51.578 "rw_ios_per_sec": 0, 00:09:51.578 "rw_mbytes_per_sec": 0, 00:09:51.578 "r_mbytes_per_sec": 0, 00:09:51.578 "w_mbytes_per_sec": 0 00:09:51.578 }, 00:09:51.578 "claimed": false, 00:09:51.578 "zoned": false, 00:09:51.578 "supported_io_types": { 00:09:51.578 "read": true, 00:09:51.578 "write": true, 00:09:51.578 "unmap": true, 00:09:51.578 "flush": true, 00:09:51.578 "reset": true, 00:09:51.578 "nvme_admin": false, 00:09:51.578 "nvme_io": false, 00:09:51.578 "nvme_io_md": false, 00:09:51.578 "write_zeroes": true, 
00:09:51.578 "zcopy": true, 00:09:51.578 "get_zone_info": false, 00:09:51.578 "zone_management": false, 00:09:51.578 "zone_append": false, 00:09:51.578 "compare": false, 00:09:51.578 "compare_and_write": false, 00:09:51.578 "abort": true, 00:09:51.578 "seek_hole": false, 00:09:51.578 "seek_data": false, 00:09:51.578 "copy": true, 00:09:51.578 "nvme_iov_md": false 00:09:51.578 }, 00:09:51.578 "memory_domains": [ 00:09:51.578 { 00:09:51.578 "dma_device_id": "system", 00:09:51.578 "dma_device_type": 1 00:09:51.578 }, 00:09:51.578 { 00:09:51.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.578 "dma_device_type": 2 00:09:51.578 } 00:09:51.578 ], 00:09:51.578 "driver_specific": {} 00:09:51.578 } 00:09:51.578 ] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 [2024-12-14 12:35:51.265883] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.578 [2024-12-14 12:35:51.265972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.578 [2024-12-14 12:35:51.266028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.578 [2024-12-14 12:35:51.267970] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.578 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.579 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:51.579 "name": "Existed_Raid", 00:09:51.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.579 "strip_size_kb": 0, 00:09:51.579 "state": "configuring", 00:09:51.579 "raid_level": "raid1", 00:09:51.579 "superblock": false, 00:09:51.579 "num_base_bdevs": 3, 00:09:51.579 "num_base_bdevs_discovered": 2, 00:09:51.579 "num_base_bdevs_operational": 3, 00:09:51.579 "base_bdevs_list": [ 00:09:51.579 { 00:09:51.579 "name": "BaseBdev1", 00:09:51.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.579 "is_configured": false, 00:09:51.579 "data_offset": 0, 00:09:51.579 "data_size": 0 00:09:51.579 }, 00:09:51.579 { 00:09:51.579 "name": "BaseBdev2", 00:09:51.579 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:51.579 "is_configured": true, 00:09:51.579 "data_offset": 0, 00:09:51.579 "data_size": 65536 00:09:51.579 }, 00:09:51.579 { 00:09:51.579 "name": "BaseBdev3", 00:09:51.579 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:51.579 "is_configured": true, 00:09:51.579 "data_offset": 0, 00:09:51.579 "data_size": 65536 00:09:51.579 } 00:09:51.579 ] 00:09:51.579 }' 00:09:51.579 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.579 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.149 [2024-12-14 12:35:51.697211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.149 "name": "Existed_Raid", 00:09:52.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.149 "strip_size_kb": 0, 00:09:52.149 "state": "configuring", 00:09:52.149 "raid_level": "raid1", 00:09:52.149 "superblock": false, 00:09:52.149 "num_base_bdevs": 3, 
00:09:52.149 "num_base_bdevs_discovered": 1, 00:09:52.149 "num_base_bdevs_operational": 3, 00:09:52.149 "base_bdevs_list": [ 00:09:52.149 { 00:09:52.149 "name": "BaseBdev1", 00:09:52.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.149 "is_configured": false, 00:09:52.149 "data_offset": 0, 00:09:52.149 "data_size": 0 00:09:52.149 }, 00:09:52.149 { 00:09:52.149 "name": null, 00:09:52.149 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:52.149 "is_configured": false, 00:09:52.149 "data_offset": 0, 00:09:52.149 "data_size": 65536 00:09:52.149 }, 00:09:52.149 { 00:09:52.149 "name": "BaseBdev3", 00:09:52.149 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:52.149 "is_configured": true, 00:09:52.149 "data_offset": 0, 00:09:52.149 "data_size": 65536 00:09:52.149 } 00:09:52.149 ] 00:09:52.149 }' 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.149 12:35:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.718 12:35:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.718 [2024-12-14 12:35:52.225499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.718 BaseBdev1 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.718 [ 00:09:52.718 { 00:09:52.718 "name": "BaseBdev1", 00:09:52.718 "aliases": [ 00:09:52.718 "8fb19dfe-b79e-4d3e-8d9d-1804a0776485" 00:09:52.718 ], 00:09:52.718 "product_name": "Malloc disk", 
00:09:52.718 "block_size": 512, 00:09:52.718 "num_blocks": 65536, 00:09:52.718 "uuid": "8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:52.718 "assigned_rate_limits": { 00:09:52.718 "rw_ios_per_sec": 0, 00:09:52.718 "rw_mbytes_per_sec": 0, 00:09:52.718 "r_mbytes_per_sec": 0, 00:09:52.718 "w_mbytes_per_sec": 0 00:09:52.718 }, 00:09:52.718 "claimed": true, 00:09:52.718 "claim_type": "exclusive_write", 00:09:52.718 "zoned": false, 00:09:52.718 "supported_io_types": { 00:09:52.718 "read": true, 00:09:52.718 "write": true, 00:09:52.718 "unmap": true, 00:09:52.718 "flush": true, 00:09:52.718 "reset": true, 00:09:52.718 "nvme_admin": false, 00:09:52.718 "nvme_io": false, 00:09:52.718 "nvme_io_md": false, 00:09:52.718 "write_zeroes": true, 00:09:52.718 "zcopy": true, 00:09:52.718 "get_zone_info": false, 00:09:52.718 "zone_management": false, 00:09:52.718 "zone_append": false, 00:09:52.718 "compare": false, 00:09:52.718 "compare_and_write": false, 00:09:52.718 "abort": true, 00:09:52.718 "seek_hole": false, 00:09:52.718 "seek_data": false, 00:09:52.718 "copy": true, 00:09:52.718 "nvme_iov_md": false 00:09:52.718 }, 00:09:52.718 "memory_domains": [ 00:09:52.718 { 00:09:52.718 "dma_device_id": "system", 00:09:52.718 "dma_device_type": 1 00:09:52.718 }, 00:09:52.718 { 00:09:52.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.718 "dma_device_type": 2 00:09:52.718 } 00:09:52.718 ], 00:09:52.718 "driver_specific": {} 00:09:52.718 } 00:09:52.718 ] 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.718 "name": "Existed_Raid", 00:09:52.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.718 "strip_size_kb": 0, 00:09:52.718 "state": "configuring", 00:09:52.718 "raid_level": "raid1", 00:09:52.718 "superblock": false, 00:09:52.718 "num_base_bdevs": 3, 00:09:52.718 "num_base_bdevs_discovered": 2, 00:09:52.718 "num_base_bdevs_operational": 3, 00:09:52.718 "base_bdevs_list": [ 00:09:52.718 { 00:09:52.718 "name": "BaseBdev1", 00:09:52.718 "uuid": 
"8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:52.718 "is_configured": true, 00:09:52.718 "data_offset": 0, 00:09:52.718 "data_size": 65536 00:09:52.718 }, 00:09:52.718 { 00:09:52.718 "name": null, 00:09:52.718 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:52.718 "is_configured": false, 00:09:52.718 "data_offset": 0, 00:09:52.718 "data_size": 65536 00:09:52.718 }, 00:09:52.718 { 00:09:52.718 "name": "BaseBdev3", 00:09:52.718 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:52.718 "is_configured": true, 00:09:52.718 "data_offset": 0, 00:09:52.718 "data_size": 65536 00:09:52.718 } 00:09:52.718 ] 00:09:52.718 }' 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.718 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.978 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.978 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.978 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.978 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.238 [2024-12-14 12:35:52.740661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:53.238 12:35:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.238 "name": "Existed_Raid", 00:09:53.238 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:53.238 "strip_size_kb": 0, 00:09:53.238 "state": "configuring", 00:09:53.238 "raid_level": "raid1", 00:09:53.238 "superblock": false, 00:09:53.238 "num_base_bdevs": 3, 00:09:53.238 "num_base_bdevs_discovered": 1, 00:09:53.238 "num_base_bdevs_operational": 3, 00:09:53.238 "base_bdevs_list": [ 00:09:53.238 { 00:09:53.238 "name": "BaseBdev1", 00:09:53.238 "uuid": "8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:53.238 "is_configured": true, 00:09:53.238 "data_offset": 0, 00:09:53.238 "data_size": 65536 00:09:53.238 }, 00:09:53.238 { 00:09:53.238 "name": null, 00:09:53.238 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:53.238 "is_configured": false, 00:09:53.238 "data_offset": 0, 00:09:53.238 "data_size": 65536 00:09:53.238 }, 00:09:53.238 { 00:09:53.238 "name": null, 00:09:53.238 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:53.238 "is_configured": false, 00:09:53.238 "data_offset": 0, 00:09:53.238 "data_size": 65536 00:09:53.238 } 00:09:53.238 ] 00:09:53.238 }' 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.238 12:35:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 [2024-12-14 12:35:53.287795] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.808 "name": "Existed_Raid", 00:09:53.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.808 "strip_size_kb": 0, 00:09:53.808 "state": "configuring", 00:09:53.808 "raid_level": "raid1", 00:09:53.808 "superblock": false, 00:09:53.808 "num_base_bdevs": 3, 00:09:53.808 "num_base_bdevs_discovered": 2, 00:09:53.808 "num_base_bdevs_operational": 3, 00:09:53.808 "base_bdevs_list": [ 00:09:53.808 { 00:09:53.808 "name": "BaseBdev1", 00:09:53.808 "uuid": "8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:53.808 "is_configured": true, 00:09:53.808 "data_offset": 0, 00:09:53.808 "data_size": 65536 00:09:53.808 }, 00:09:53.808 { 00:09:53.808 "name": null, 00:09:53.808 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:53.808 "is_configured": false, 00:09:53.808 "data_offset": 0, 00:09:53.808 "data_size": 65536 00:09:53.808 }, 00:09:53.808 { 00:09:53.808 "name": "BaseBdev3", 00:09:53.808 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:53.808 "is_configured": true, 00:09:53.808 "data_offset": 0, 00:09:53.808 "data_size": 65536 00:09:53.808 } 00:09:53.808 ] 00:09:53.808 }' 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.808 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.067 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.067 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:54.067 12:35:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.067 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.067 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.326 [2024-12-14 12:35:53.810897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.326 12:35:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.326 "name": "Existed_Raid", 00:09:54.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.326 "strip_size_kb": 0, 00:09:54.326 "state": "configuring", 00:09:54.326 "raid_level": "raid1", 00:09:54.326 "superblock": false, 00:09:54.326 "num_base_bdevs": 3, 00:09:54.326 "num_base_bdevs_discovered": 1, 00:09:54.326 "num_base_bdevs_operational": 3, 00:09:54.326 "base_bdevs_list": [ 00:09:54.326 { 00:09:54.326 "name": null, 00:09:54.326 "uuid": "8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:54.326 "is_configured": false, 00:09:54.326 "data_offset": 0, 00:09:54.326 "data_size": 65536 00:09:54.326 }, 00:09:54.326 { 00:09:54.326 "name": null, 00:09:54.326 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:54.326 "is_configured": false, 00:09:54.326 "data_offset": 0, 00:09:54.326 "data_size": 65536 00:09:54.326 }, 00:09:54.326 { 00:09:54.326 "name": "BaseBdev3", 00:09:54.326 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:54.326 "is_configured": true, 00:09:54.326 "data_offset": 0, 00:09:54.326 "data_size": 65536 00:09:54.326 } 00:09:54.326 ] 00:09:54.326 }' 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.326 12:35:53 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:54.896 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.896 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.896 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.896 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.896 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.896 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:54.896 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 [2024-12-14 12:35:54.361309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.897 "name": "Existed_Raid", 00:09:54.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.897 "strip_size_kb": 0, 00:09:54.897 "state": "configuring", 00:09:54.897 "raid_level": "raid1", 00:09:54.897 "superblock": false, 00:09:54.897 "num_base_bdevs": 3, 00:09:54.897 "num_base_bdevs_discovered": 2, 00:09:54.897 "num_base_bdevs_operational": 3, 00:09:54.897 "base_bdevs_list": [ 00:09:54.897 { 00:09:54.897 "name": null, 00:09:54.897 "uuid": "8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:54.897 "is_configured": false, 00:09:54.897 "data_offset": 0, 00:09:54.897 "data_size": 65536 00:09:54.897 }, 00:09:54.897 { 00:09:54.897 "name": "BaseBdev2", 00:09:54.897 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:54.897 "is_configured": true, 00:09:54.897 "data_offset": 0, 00:09:54.897 "data_size": 65536 00:09:54.897 }, 00:09:54.897 { 
00:09:54.897 "name": "BaseBdev3", 00:09:54.897 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:54.897 "is_configured": true, 00:09:54.897 "data_offset": 0, 00:09:54.897 "data_size": 65536 00:09:54.897 } 00:09:54.897 ] 00:09:54.897 }' 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.897 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:55.156 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8fb19dfe-b79e-4d3e-8d9d-1804a0776485 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.416 12:35:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.416 [2024-12-14 12:35:54.963628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:55.416 [2024-12-14 12:35:54.963681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:55.416 [2024-12-14 12:35:54.963689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:55.416 [2024-12-14 12:35:54.963945] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:55.416 [2024-12-14 12:35:54.964123] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:55.416 [2024-12-14 12:35:54.964136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:55.416 [2024-12-14 12:35:54.964369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:55.416 NewBaseBdev 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.416 12:35:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.416 [ 00:09:55.416 { 00:09:55.416 "name": "NewBaseBdev", 00:09:55.416 "aliases": [ 00:09:55.416 "8fb19dfe-b79e-4d3e-8d9d-1804a0776485" 00:09:55.416 ], 00:09:55.416 "product_name": "Malloc disk", 00:09:55.416 "block_size": 512, 00:09:55.416 "num_blocks": 65536, 00:09:55.416 "uuid": "8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:55.416 "assigned_rate_limits": { 00:09:55.416 "rw_ios_per_sec": 0, 00:09:55.416 "rw_mbytes_per_sec": 0, 00:09:55.416 "r_mbytes_per_sec": 0, 00:09:55.416 "w_mbytes_per_sec": 0 00:09:55.416 }, 00:09:55.416 "claimed": true, 00:09:55.416 "claim_type": "exclusive_write", 00:09:55.416 "zoned": false, 00:09:55.416 "supported_io_types": { 00:09:55.416 "read": true, 00:09:55.416 "write": true, 00:09:55.416 "unmap": true, 00:09:55.416 "flush": true, 00:09:55.416 "reset": true, 00:09:55.416 "nvme_admin": false, 00:09:55.416 "nvme_io": false, 00:09:55.416 "nvme_io_md": false, 00:09:55.416 "write_zeroes": true, 00:09:55.417 "zcopy": true, 00:09:55.417 "get_zone_info": false, 00:09:55.417 "zone_management": false, 00:09:55.417 "zone_append": false, 00:09:55.417 "compare": false, 00:09:55.417 "compare_and_write": false, 00:09:55.417 "abort": true, 00:09:55.417 "seek_hole": false, 00:09:55.417 "seek_data": false, 00:09:55.417 "copy": true, 00:09:55.417 "nvme_iov_md": false 00:09:55.417 }, 00:09:55.417 "memory_domains": [ 00:09:55.417 { 00:09:55.417 
"dma_device_id": "system", 00:09:55.417 "dma_device_type": 1 00:09:55.417 }, 00:09:55.417 { 00:09:55.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.417 "dma_device_type": 2 00:09:55.417 } 00:09:55.417 ], 00:09:55.417 "driver_specific": {} 00:09:55.417 } 00:09:55.417 ] 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.417 "name": "Existed_Raid", 00:09:55.417 "uuid": "d736b5db-f363-403e-969d-792df134901b", 00:09:55.417 "strip_size_kb": 0, 00:09:55.417 "state": "online", 00:09:55.417 "raid_level": "raid1", 00:09:55.417 "superblock": false, 00:09:55.417 "num_base_bdevs": 3, 00:09:55.417 "num_base_bdevs_discovered": 3, 00:09:55.417 "num_base_bdevs_operational": 3, 00:09:55.417 "base_bdevs_list": [ 00:09:55.417 { 00:09:55.417 "name": "NewBaseBdev", 00:09:55.417 "uuid": "8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:55.417 "is_configured": true, 00:09:55.417 "data_offset": 0, 00:09:55.417 "data_size": 65536 00:09:55.417 }, 00:09:55.417 { 00:09:55.417 "name": "BaseBdev2", 00:09:55.417 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:55.417 "is_configured": true, 00:09:55.417 "data_offset": 0, 00:09:55.417 "data_size": 65536 00:09:55.417 }, 00:09:55.417 { 00:09:55.417 "name": "BaseBdev3", 00:09:55.417 "uuid": "2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:55.417 "is_configured": true, 00:09:55.417 "data_offset": 0, 00:09:55.417 "data_size": 65536 00:09:55.417 } 00:09:55.417 ] 00:09:55.417 }' 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.417 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.674 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.675 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.675 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.675 12:35:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.675 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.675 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.933 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.933 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.933 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.933 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.933 [2024-12-14 12:35:55.419267] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.933 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.933 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.933 "name": "Existed_Raid", 00:09:55.933 "aliases": [ 00:09:55.933 "d736b5db-f363-403e-969d-792df134901b" 00:09:55.933 ], 00:09:55.933 "product_name": "Raid Volume", 00:09:55.933 "block_size": 512, 00:09:55.933 "num_blocks": 65536, 00:09:55.933 "uuid": "d736b5db-f363-403e-969d-792df134901b", 00:09:55.933 "assigned_rate_limits": { 00:09:55.933 "rw_ios_per_sec": 0, 00:09:55.933 "rw_mbytes_per_sec": 0, 00:09:55.933 "r_mbytes_per_sec": 0, 00:09:55.933 "w_mbytes_per_sec": 0 00:09:55.933 }, 00:09:55.933 "claimed": false, 00:09:55.933 "zoned": false, 00:09:55.933 "supported_io_types": { 00:09:55.933 "read": true, 00:09:55.933 "write": true, 00:09:55.933 "unmap": false, 00:09:55.933 "flush": false, 00:09:55.933 "reset": true, 00:09:55.933 "nvme_admin": false, 00:09:55.933 "nvme_io": false, 00:09:55.933 "nvme_io_md": false, 00:09:55.933 "write_zeroes": true, 00:09:55.933 "zcopy": false, 00:09:55.933 
"get_zone_info": false, 00:09:55.933 "zone_management": false, 00:09:55.933 "zone_append": false, 00:09:55.933 "compare": false, 00:09:55.933 "compare_and_write": false, 00:09:55.933 "abort": false, 00:09:55.933 "seek_hole": false, 00:09:55.933 "seek_data": false, 00:09:55.933 "copy": false, 00:09:55.933 "nvme_iov_md": false 00:09:55.933 }, 00:09:55.933 "memory_domains": [ 00:09:55.933 { 00:09:55.933 "dma_device_id": "system", 00:09:55.933 "dma_device_type": 1 00:09:55.933 }, 00:09:55.933 { 00:09:55.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.933 "dma_device_type": 2 00:09:55.933 }, 00:09:55.933 { 00:09:55.933 "dma_device_id": "system", 00:09:55.933 "dma_device_type": 1 00:09:55.933 }, 00:09:55.933 { 00:09:55.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.933 "dma_device_type": 2 00:09:55.933 }, 00:09:55.933 { 00:09:55.933 "dma_device_id": "system", 00:09:55.933 "dma_device_type": 1 00:09:55.933 }, 00:09:55.933 { 00:09:55.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.933 "dma_device_type": 2 00:09:55.933 } 00:09:55.933 ], 00:09:55.933 "driver_specific": { 00:09:55.933 "raid": { 00:09:55.933 "uuid": "d736b5db-f363-403e-969d-792df134901b", 00:09:55.933 "strip_size_kb": 0, 00:09:55.933 "state": "online", 00:09:55.933 "raid_level": "raid1", 00:09:55.933 "superblock": false, 00:09:55.933 "num_base_bdevs": 3, 00:09:55.933 "num_base_bdevs_discovered": 3, 00:09:55.933 "num_base_bdevs_operational": 3, 00:09:55.933 "base_bdevs_list": [ 00:09:55.933 { 00:09:55.933 "name": "NewBaseBdev", 00:09:55.933 "uuid": "8fb19dfe-b79e-4d3e-8d9d-1804a0776485", 00:09:55.933 "is_configured": true, 00:09:55.933 "data_offset": 0, 00:09:55.933 "data_size": 65536 00:09:55.933 }, 00:09:55.933 { 00:09:55.933 "name": "BaseBdev2", 00:09:55.933 "uuid": "3dc50f40-e0f7-4a5b-8938-cc270304c1d6", 00:09:55.933 "is_configured": true, 00:09:55.933 "data_offset": 0, 00:09:55.933 "data_size": 65536 00:09:55.933 }, 00:09:55.933 { 00:09:55.933 "name": "BaseBdev3", 00:09:55.933 "uuid": 
"2d2385c5-770e-40c1-b884-8a6a97d1f90a", 00:09:55.933 "is_configured": true, 00:09:55.933 "data_offset": 0, 00:09:55.934 "data_size": 65536 00:09:55.934 } 00:09:55.934 ] 00:09:55.934 } 00:09:55.934 } 00:09:55.934 }' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:55.934 BaseBdev2 00:09:55.934 BaseBdev3' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.934 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.194 
[2024-12-14 12:35:55.698433] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.194 [2024-12-14 12:35:55.698509] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:56.194 [2024-12-14 12:35:55.698613] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:56.194 [2024-12-14 12:35:55.698969] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:56.194 [2024-12-14 12:35:55.699028] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69191 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 69191 ']' 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69191 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69191 00:09:56.194 killing process with pid 69191 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69191' 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69191 00:09:56.194 [2024-12-14 
12:35:55.747473] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:56.194 12:35:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69191 00:09:56.453 [2024-12-14 12:35:56.047088] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.833 12:35:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:57.833 00:09:57.833 real 0m10.552s 00:09:57.833 user 0m16.814s 00:09:57.833 sys 0m1.777s 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.834 ************************************ 00:09:57.834 END TEST raid_state_function_test 00:09:57.834 ************************************ 00:09:57.834 12:35:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:57.834 12:35:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.834 12:35:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.834 12:35:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.834 ************************************ 00:09:57.834 START TEST raid_state_function_test_sb 00:09:57.834 ************************************ 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.834 12:35:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:57.834 
12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69813 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69813' 00:09:57.834 Process raid pid: 69813 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69813 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69813 ']' 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.834 12:35:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.834 [2024-12-14 12:35:57.335309] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:09:57.834 [2024-12-14 12:35:57.335430] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.834 [2024-12-14 12:35:57.490566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:58.096 [2024-12-14 12:35:57.604034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.096 [2024-12-14 12:35:57.815593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.096 [2024-12-14 12:35:57.815637] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.673 [2024-12-14 12:35:58.174531] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.673 [2024-12-14 12:35:58.174583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.673 [2024-12-14 12:35:58.174594] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.673 [2024-12-14 12:35:58.174619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.673 [2024-12-14 12:35:58.174631] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:58.673 [2024-12-14 12:35:58.174640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.673 "name": "Existed_Raid", 00:09:58.673 "uuid": "b10b8e0a-78c4-4cc7-8341-44fa61b4aed1", 00:09:58.673 "strip_size_kb": 0, 00:09:58.673 "state": "configuring", 00:09:58.673 "raid_level": "raid1", 00:09:58.673 "superblock": true, 00:09:58.673 "num_base_bdevs": 3, 00:09:58.673 "num_base_bdevs_discovered": 0, 00:09:58.673 "num_base_bdevs_operational": 3, 00:09:58.673 "base_bdevs_list": [ 00:09:58.673 { 00:09:58.673 "name": "BaseBdev1", 00:09:58.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.673 "is_configured": false, 00:09:58.673 "data_offset": 0, 00:09:58.673 "data_size": 0 00:09:58.673 }, 00:09:58.673 { 00:09:58.673 "name": "BaseBdev2", 00:09:58.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.673 "is_configured": false, 00:09:58.673 "data_offset": 0, 00:09:58.673 "data_size": 0 00:09:58.673 }, 00:09:58.673 { 00:09:58.673 "name": "BaseBdev3", 00:09:58.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.673 "is_configured": false, 00:09:58.673 "data_offset": 0, 00:09:58.673 "data_size": 0 00:09:58.673 } 00:09:58.673 ] 00:09:58.673 }' 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.673 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.932 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.932 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.932 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.932 [2024-12-14 12:35:58.625767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.932 [2024-12-14 12:35:58.625806] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:58.932 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.933 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.933 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.933 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.933 [2024-12-14 12:35:58.637744] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.933 [2024-12-14 12:35:58.637790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.933 [2024-12-14 12:35:58.637799] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.933 [2024-12-14 12:35:58.637808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.933 [2024-12-14 12:35:58.637814] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.933 [2024-12-14 12:35:58.637823] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.933 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.933 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.933 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.933 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.192 [2024-12-14 12:35:58.685923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.192 BaseBdev1 
00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.192 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.192 [ 00:09:59.192 { 00:09:59.192 "name": "BaseBdev1", 00:09:59.192 "aliases": [ 00:09:59.192 "20d1fe02-694e-4bac-a8ca-8e8efc428586" 00:09:59.192 ], 00:09:59.192 "product_name": "Malloc disk", 00:09:59.192 "block_size": 512, 00:09:59.192 "num_blocks": 65536, 00:09:59.192 "uuid": "20d1fe02-694e-4bac-a8ca-8e8efc428586", 00:09:59.192 "assigned_rate_limits": { 00:09:59.192 
"rw_ios_per_sec": 0, 00:09:59.192 "rw_mbytes_per_sec": 0, 00:09:59.192 "r_mbytes_per_sec": 0, 00:09:59.192 "w_mbytes_per_sec": 0 00:09:59.192 }, 00:09:59.192 "claimed": true, 00:09:59.192 "claim_type": "exclusive_write", 00:09:59.192 "zoned": false, 00:09:59.192 "supported_io_types": { 00:09:59.192 "read": true, 00:09:59.192 "write": true, 00:09:59.192 "unmap": true, 00:09:59.192 "flush": true, 00:09:59.192 "reset": true, 00:09:59.192 "nvme_admin": false, 00:09:59.192 "nvme_io": false, 00:09:59.192 "nvme_io_md": false, 00:09:59.192 "write_zeroes": true, 00:09:59.192 "zcopy": true, 00:09:59.192 "get_zone_info": false, 00:09:59.193 "zone_management": false, 00:09:59.193 "zone_append": false, 00:09:59.193 "compare": false, 00:09:59.193 "compare_and_write": false, 00:09:59.193 "abort": true, 00:09:59.193 "seek_hole": false, 00:09:59.193 "seek_data": false, 00:09:59.193 "copy": true, 00:09:59.193 "nvme_iov_md": false 00:09:59.193 }, 00:09:59.193 "memory_domains": [ 00:09:59.193 { 00:09:59.193 "dma_device_id": "system", 00:09:59.193 "dma_device_type": 1 00:09:59.193 }, 00:09:59.193 { 00:09:59.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.193 "dma_device_type": 2 00:09:59.193 } 00:09:59.193 ], 00:09:59.193 "driver_specific": {} 00:09:59.193 } 00:09:59.193 ] 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.193 "name": "Existed_Raid", 00:09:59.193 "uuid": "f6dd3f15-7ead-43f3-a36f-52ff143ba91e", 00:09:59.193 "strip_size_kb": 0, 00:09:59.193 "state": "configuring", 00:09:59.193 "raid_level": "raid1", 00:09:59.193 "superblock": true, 00:09:59.193 "num_base_bdevs": 3, 00:09:59.193 "num_base_bdevs_discovered": 1, 00:09:59.193 "num_base_bdevs_operational": 3, 00:09:59.193 "base_bdevs_list": [ 00:09:59.193 { 00:09:59.193 "name": "BaseBdev1", 00:09:59.193 "uuid": "20d1fe02-694e-4bac-a8ca-8e8efc428586", 00:09:59.193 "is_configured": true, 00:09:59.193 "data_offset": 2048, 00:09:59.193 "data_size": 63488 
00:09:59.193 }, 00:09:59.193 { 00:09:59.193 "name": "BaseBdev2", 00:09:59.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.193 "is_configured": false, 00:09:59.193 "data_offset": 0, 00:09:59.193 "data_size": 0 00:09:59.193 }, 00:09:59.193 { 00:09:59.193 "name": "BaseBdev3", 00:09:59.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.193 "is_configured": false, 00:09:59.193 "data_offset": 0, 00:09:59.193 "data_size": 0 00:09:59.193 } 00:09:59.193 ] 00:09:59.193 }' 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.193 12:35:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 [2024-12-14 12:35:59.161186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.452 [2024-12-14 12:35:59.161362] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.452 [2024-12-14 12:35:59.173232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.452 [2024-12-14 12:35:59.175210] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.452 [2024-12-14 12:35:59.175257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.452 [2024-12-14 12:35:59.175268] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.452 [2024-12-14 12:35:59.175278] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.452 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.453 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.713 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.713 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.713 "name": "Existed_Raid", 00:09:59.713 "uuid": "9c8dee41-e1d9-41e9-acd9-6443c8bb2afc", 00:09:59.713 "strip_size_kb": 0, 00:09:59.713 "state": "configuring", 00:09:59.713 "raid_level": "raid1", 00:09:59.713 "superblock": true, 00:09:59.713 "num_base_bdevs": 3, 00:09:59.713 "num_base_bdevs_discovered": 1, 00:09:59.713 "num_base_bdevs_operational": 3, 00:09:59.713 "base_bdevs_list": [ 00:09:59.713 { 00:09:59.713 "name": "BaseBdev1", 00:09:59.713 "uuid": "20d1fe02-694e-4bac-a8ca-8e8efc428586", 00:09:59.713 "is_configured": true, 00:09:59.713 "data_offset": 2048, 00:09:59.713 "data_size": 63488 00:09:59.713 }, 00:09:59.713 { 00:09:59.713 "name": "BaseBdev2", 00:09:59.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.713 "is_configured": false, 00:09:59.713 "data_offset": 0, 00:09:59.713 "data_size": 0 00:09:59.713 }, 00:09:59.713 { 00:09:59.713 "name": "BaseBdev3", 00:09:59.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.713 "is_configured": false, 00:09:59.713 "data_offset": 0, 00:09:59.713 "data_size": 0 00:09:59.713 } 00:09:59.713 ] 00:09:59.713 }' 00:09:59.713 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.713 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.973 [2024-12-14 12:35:59.648303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.973 BaseBdev2 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.973 [ 00:09:59.973 { 00:09:59.973 "name": "BaseBdev2", 00:09:59.973 "aliases": [ 00:09:59.973 "28958628-ffab-4164-8be1-5e3fd1928d34" 00:09:59.973 ], 00:09:59.973 "product_name": "Malloc disk", 00:09:59.973 "block_size": 512, 00:09:59.973 "num_blocks": 65536, 00:09:59.973 "uuid": "28958628-ffab-4164-8be1-5e3fd1928d34", 00:09:59.973 "assigned_rate_limits": { 00:09:59.973 "rw_ios_per_sec": 0, 00:09:59.973 "rw_mbytes_per_sec": 0, 00:09:59.973 "r_mbytes_per_sec": 0, 00:09:59.973 "w_mbytes_per_sec": 0 00:09:59.973 }, 00:09:59.973 "claimed": true, 00:09:59.973 "claim_type": "exclusive_write", 00:09:59.973 "zoned": false, 00:09:59.973 "supported_io_types": { 00:09:59.973 "read": true, 00:09:59.973 "write": true, 00:09:59.973 "unmap": true, 00:09:59.973 "flush": true, 00:09:59.973 "reset": true, 00:09:59.973 "nvme_admin": false, 00:09:59.973 "nvme_io": false, 00:09:59.973 "nvme_io_md": false, 00:09:59.973 "write_zeroes": true, 00:09:59.973 "zcopy": true, 00:09:59.973 "get_zone_info": false, 00:09:59.973 "zone_management": false, 00:09:59.973 "zone_append": false, 00:09:59.973 "compare": false, 00:09:59.973 "compare_and_write": false, 00:09:59.973 "abort": true, 00:09:59.973 "seek_hole": false, 00:09:59.973 "seek_data": false, 00:09:59.973 "copy": true, 00:09:59.973 "nvme_iov_md": false 00:09:59.973 }, 00:09:59.973 "memory_domains": [ 00:09:59.973 { 00:09:59.973 "dma_device_id": "system", 00:09:59.973 "dma_device_type": 1 00:09:59.973 }, 00:09:59.973 { 00:09:59.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.973 "dma_device_type": 2 00:09:59.973 } 00:09:59.973 ], 00:09:59.973 "driver_specific": {} 00:09:59.973 } 00:09:59.973 ] 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.973 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.233 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.233 
12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.233 "name": "Existed_Raid", 00:10:00.233 "uuid": "9c8dee41-e1d9-41e9-acd9-6443c8bb2afc", 00:10:00.233 "strip_size_kb": 0, 00:10:00.233 "state": "configuring", 00:10:00.233 "raid_level": "raid1", 00:10:00.233 "superblock": true, 00:10:00.233 "num_base_bdevs": 3, 00:10:00.233 "num_base_bdevs_discovered": 2, 00:10:00.233 "num_base_bdevs_operational": 3, 00:10:00.233 "base_bdevs_list": [ 00:10:00.233 { 00:10:00.233 "name": "BaseBdev1", 00:10:00.233 "uuid": "20d1fe02-694e-4bac-a8ca-8e8efc428586", 00:10:00.233 "is_configured": true, 00:10:00.233 "data_offset": 2048, 00:10:00.233 "data_size": 63488 00:10:00.233 }, 00:10:00.233 { 00:10:00.233 "name": "BaseBdev2", 00:10:00.233 "uuid": "28958628-ffab-4164-8be1-5e3fd1928d34", 00:10:00.233 "is_configured": true, 00:10:00.233 "data_offset": 2048, 00:10:00.233 "data_size": 63488 00:10:00.233 }, 00:10:00.233 { 00:10:00.233 "name": "BaseBdev3", 00:10:00.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.233 "is_configured": false, 00:10:00.233 "data_offset": 0, 00:10:00.233 "data_size": 0 00:10:00.233 } 00:10:00.233 ] 00:10:00.233 }' 00:10:00.233 12:35:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.233 12:35:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.492 [2024-12-14 12:36:00.187372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.492 [2024-12-14 12:36:00.187723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:00.492 [2024-12-14 12:36:00.187748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:00.492 [2024-12-14 12:36:00.188014] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:00.492 [2024-12-14 12:36:00.188202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.492 [2024-12-14 12:36:00.188213] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:00.492 BaseBdev3 00:10:00.492 [2024-12-14 12:36:00.188371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.492 12:36:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.492 [ 00:10:00.492 { 00:10:00.492 "name": "BaseBdev3", 00:10:00.492 "aliases": [ 00:10:00.492 "6629687e-78ab-46b0-9516-4d28cdbfe853" 00:10:00.492 ], 00:10:00.492 "product_name": "Malloc disk", 00:10:00.492 "block_size": 512, 00:10:00.492 "num_blocks": 65536, 00:10:00.492 "uuid": "6629687e-78ab-46b0-9516-4d28cdbfe853", 00:10:00.492 "assigned_rate_limits": { 00:10:00.492 "rw_ios_per_sec": 0, 00:10:00.492 "rw_mbytes_per_sec": 0, 00:10:00.492 "r_mbytes_per_sec": 0, 00:10:00.492 "w_mbytes_per_sec": 0 00:10:00.492 }, 00:10:00.492 "claimed": true, 00:10:00.492 "claim_type": "exclusive_write", 00:10:00.492 "zoned": false, 00:10:00.492 "supported_io_types": { 00:10:00.492 "read": true, 00:10:00.492 "write": true, 00:10:00.492 "unmap": true, 00:10:00.492 "flush": true, 00:10:00.492 "reset": true, 00:10:00.492 "nvme_admin": false, 00:10:00.492 "nvme_io": false, 00:10:00.492 "nvme_io_md": false, 00:10:00.492 "write_zeroes": true, 00:10:00.492 "zcopy": true, 00:10:00.492 "get_zone_info": false, 00:10:00.492 "zone_management": false, 00:10:00.492 "zone_append": false, 00:10:00.492 "compare": false, 00:10:00.492 "compare_and_write": false, 00:10:00.492 "abort": true, 00:10:00.492 "seek_hole": false, 00:10:00.492 "seek_data": false, 00:10:00.492 "copy": true, 00:10:00.492 "nvme_iov_md": false 00:10:00.492 }, 00:10:00.492 "memory_domains": [ 00:10:00.492 { 00:10:00.492 "dma_device_id": "system", 00:10:00.492 "dma_device_type": 1 00:10:00.492 }, 00:10:00.492 { 00:10:00.492 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.492 "dma_device_type": 2 00:10:00.492 } 00:10:00.492 ], 00:10:00.492 "driver_specific": {} 00:10:00.492 } 00:10:00.492 ] 
00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.492 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.752 
12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.752 "name": "Existed_Raid", 00:10:00.752 "uuid": "9c8dee41-e1d9-41e9-acd9-6443c8bb2afc", 00:10:00.752 "strip_size_kb": 0, 00:10:00.752 "state": "online", 00:10:00.752 "raid_level": "raid1", 00:10:00.752 "superblock": true, 00:10:00.752 "num_base_bdevs": 3, 00:10:00.752 "num_base_bdevs_discovered": 3, 00:10:00.752 "num_base_bdevs_operational": 3, 00:10:00.752 "base_bdevs_list": [ 00:10:00.752 { 00:10:00.752 "name": "BaseBdev1", 00:10:00.752 "uuid": "20d1fe02-694e-4bac-a8ca-8e8efc428586", 00:10:00.752 "is_configured": true, 00:10:00.752 "data_offset": 2048, 00:10:00.752 "data_size": 63488 00:10:00.752 }, 00:10:00.752 { 00:10:00.752 "name": "BaseBdev2", 00:10:00.752 "uuid": "28958628-ffab-4164-8be1-5e3fd1928d34", 00:10:00.752 "is_configured": true, 00:10:00.752 "data_offset": 2048, 00:10:00.752 "data_size": 63488 00:10:00.752 }, 00:10:00.752 { 00:10:00.752 "name": "BaseBdev3", 00:10:00.752 "uuid": "6629687e-78ab-46b0-9516-4d28cdbfe853", 00:10:00.752 "is_configured": true, 00:10:00.752 "data_offset": 2048, 00:10:00.752 "data_size": 63488 00:10:00.752 } 00:10:00.752 ] 00:10:00.752 }' 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.752 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.012 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.012 [2024-12-14 12:36:00.726850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.271 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.271 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.271 "name": "Existed_Raid", 00:10:01.271 "aliases": [ 00:10:01.271 "9c8dee41-e1d9-41e9-acd9-6443c8bb2afc" 00:10:01.271 ], 00:10:01.271 "product_name": "Raid Volume", 00:10:01.271 "block_size": 512, 00:10:01.271 "num_blocks": 63488, 00:10:01.271 "uuid": "9c8dee41-e1d9-41e9-acd9-6443c8bb2afc", 00:10:01.271 "assigned_rate_limits": { 00:10:01.271 "rw_ios_per_sec": 0, 00:10:01.271 "rw_mbytes_per_sec": 0, 00:10:01.271 "r_mbytes_per_sec": 0, 00:10:01.271 "w_mbytes_per_sec": 0 00:10:01.271 }, 00:10:01.271 "claimed": false, 00:10:01.271 "zoned": false, 00:10:01.271 "supported_io_types": { 00:10:01.271 "read": true, 00:10:01.271 "write": true, 00:10:01.271 "unmap": false, 00:10:01.271 "flush": false, 00:10:01.271 "reset": true, 00:10:01.271 "nvme_admin": false, 00:10:01.271 "nvme_io": false, 00:10:01.271 "nvme_io_md": false, 00:10:01.271 "write_zeroes": true, 
00:10:01.271 "zcopy": false, 00:10:01.271 "get_zone_info": false, 00:10:01.271 "zone_management": false, 00:10:01.271 "zone_append": false, 00:10:01.271 "compare": false, 00:10:01.271 "compare_and_write": false, 00:10:01.271 "abort": false, 00:10:01.271 "seek_hole": false, 00:10:01.271 "seek_data": false, 00:10:01.271 "copy": false, 00:10:01.271 "nvme_iov_md": false 00:10:01.271 }, 00:10:01.271 "memory_domains": [ 00:10:01.271 { 00:10:01.271 "dma_device_id": "system", 00:10:01.271 "dma_device_type": 1 00:10:01.271 }, 00:10:01.271 { 00:10:01.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.271 "dma_device_type": 2 00:10:01.271 }, 00:10:01.271 { 00:10:01.271 "dma_device_id": "system", 00:10:01.271 "dma_device_type": 1 00:10:01.271 }, 00:10:01.271 { 00:10:01.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.271 "dma_device_type": 2 00:10:01.271 }, 00:10:01.271 { 00:10:01.271 "dma_device_id": "system", 00:10:01.271 "dma_device_type": 1 00:10:01.271 }, 00:10:01.271 { 00:10:01.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.271 "dma_device_type": 2 00:10:01.271 } 00:10:01.272 ], 00:10:01.272 "driver_specific": { 00:10:01.272 "raid": { 00:10:01.272 "uuid": "9c8dee41-e1d9-41e9-acd9-6443c8bb2afc", 00:10:01.272 "strip_size_kb": 0, 00:10:01.272 "state": "online", 00:10:01.272 "raid_level": "raid1", 00:10:01.272 "superblock": true, 00:10:01.272 "num_base_bdevs": 3, 00:10:01.272 "num_base_bdevs_discovered": 3, 00:10:01.272 "num_base_bdevs_operational": 3, 00:10:01.272 "base_bdevs_list": [ 00:10:01.272 { 00:10:01.272 "name": "BaseBdev1", 00:10:01.272 "uuid": "20d1fe02-694e-4bac-a8ca-8e8efc428586", 00:10:01.272 "is_configured": true, 00:10:01.272 "data_offset": 2048, 00:10:01.272 "data_size": 63488 00:10:01.272 }, 00:10:01.272 { 00:10:01.272 "name": "BaseBdev2", 00:10:01.272 "uuid": "28958628-ffab-4164-8be1-5e3fd1928d34", 00:10:01.272 "is_configured": true, 00:10:01.272 "data_offset": 2048, 00:10:01.272 "data_size": 63488 00:10:01.272 }, 00:10:01.272 { 
00:10:01.272 "name": "BaseBdev3", 00:10:01.272 "uuid": "6629687e-78ab-46b0-9516-4d28cdbfe853", 00:10:01.272 "is_configured": true, 00:10:01.272 "data_offset": 2048, 00:10:01.272 "data_size": 63488 00:10:01.272 } 00:10:01.272 ] 00:10:01.272 } 00:10:01.272 } 00:10:01.272 }' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.272 BaseBdev2 00:10:01.272 BaseBdev3' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.272 12:36:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.272 12:36:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.531 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.531 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.531 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.531 12:36:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.531 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.531 [2024-12-14 12:36:01.018240] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.531 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.532 
12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.532 "name": "Existed_Raid", 00:10:01.532 "uuid": "9c8dee41-e1d9-41e9-acd9-6443c8bb2afc", 00:10:01.532 "strip_size_kb": 0, 00:10:01.532 "state": "online", 00:10:01.532 "raid_level": "raid1", 00:10:01.532 "superblock": true, 00:10:01.532 "num_base_bdevs": 3, 00:10:01.532 "num_base_bdevs_discovered": 2, 00:10:01.532 "num_base_bdevs_operational": 2, 00:10:01.532 "base_bdevs_list": [ 00:10:01.532 { 00:10:01.532 "name": null, 00:10:01.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.532 "is_configured": false, 00:10:01.532 "data_offset": 0, 00:10:01.532 "data_size": 63488 00:10:01.532 }, 00:10:01.532 { 00:10:01.532 "name": "BaseBdev2", 00:10:01.532 "uuid": "28958628-ffab-4164-8be1-5e3fd1928d34", 00:10:01.532 "is_configured": true, 00:10:01.532 "data_offset": 2048, 00:10:01.532 "data_size": 63488 00:10:01.532 }, 00:10:01.532 { 00:10:01.532 "name": "BaseBdev3", 00:10:01.532 "uuid": "6629687e-78ab-46b0-9516-4d28cdbfe853", 00:10:01.532 "is_configured": true, 00:10:01.532 "data_offset": 2048, 00:10:01.532 "data_size": 63488 00:10:01.532 } 00:10:01.532 ] 00:10:01.532 }' 00:10:01.532 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.532 
12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.099 [2024-12-14 12:36:01.619767] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.099 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.100 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.100 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.100 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.100 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.100 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:02.100 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.100 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.100 [2024-12-14 12:36:01.778827] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.100 [2024-12-14 12:36:01.779011] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.360 [2024-12-14 12:36:01.876401] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.360 [2024-12-14 12:36:01.876453] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.360 [2024-12-14 12:36:01.876466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.360 BaseBdev2 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.360 12:36:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.360 [ 00:10:02.360 { 00:10:02.360 "name": "BaseBdev2", 00:10:02.360 "aliases": [ 00:10:02.360 "cabfeb84-c68f-40d9-b67d-fc1220a0d6af" 00:10:02.360 ], 00:10:02.360 "product_name": "Malloc disk", 00:10:02.360 "block_size": 512, 00:10:02.360 "num_blocks": 65536, 00:10:02.360 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:02.360 "assigned_rate_limits": { 00:10:02.360 "rw_ios_per_sec": 0, 00:10:02.360 "rw_mbytes_per_sec": 0, 00:10:02.360 "r_mbytes_per_sec": 0, 00:10:02.360 "w_mbytes_per_sec": 0 00:10:02.360 }, 00:10:02.360 "claimed": false, 00:10:02.360 "zoned": false, 00:10:02.360 "supported_io_types": { 00:10:02.360 "read": true, 00:10:02.360 "write": true, 00:10:02.360 "unmap": true, 00:10:02.360 "flush": true, 00:10:02.360 "reset": true, 00:10:02.360 "nvme_admin": false, 00:10:02.360 "nvme_io": false, 00:10:02.360 
"nvme_io_md": false, 00:10:02.360 "write_zeroes": true, 00:10:02.360 "zcopy": true, 00:10:02.360 "get_zone_info": false, 00:10:02.360 "zone_management": false, 00:10:02.360 "zone_append": false, 00:10:02.360 "compare": false, 00:10:02.360 "compare_and_write": false, 00:10:02.360 "abort": true, 00:10:02.360 "seek_hole": false, 00:10:02.360 "seek_data": false, 00:10:02.360 "copy": true, 00:10:02.360 "nvme_iov_md": false 00:10:02.360 }, 00:10:02.360 "memory_domains": [ 00:10:02.360 { 00:10:02.360 "dma_device_id": "system", 00:10:02.360 "dma_device_type": 1 00:10:02.360 }, 00:10:02.360 { 00:10:02.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.360 "dma_device_type": 2 00:10:02.360 } 00:10:02.360 ], 00:10:02.360 "driver_specific": {} 00:10:02.360 } 00:10:02.360 ] 00:10:02.360 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.360 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.361 BaseBdev3 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.361 [ 00:10:02.361 { 00:10:02.361 "name": "BaseBdev3", 00:10:02.361 "aliases": [ 00:10:02.361 "64a954d9-68d9-493d-95a9-6f891f6dbc48" 00:10:02.361 ], 00:10:02.361 "product_name": "Malloc disk", 00:10:02.361 "block_size": 512, 00:10:02.361 "num_blocks": 65536, 00:10:02.361 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:02.361 "assigned_rate_limits": { 00:10:02.361 "rw_ios_per_sec": 0, 00:10:02.361 "rw_mbytes_per_sec": 0, 00:10:02.361 "r_mbytes_per_sec": 0, 00:10:02.361 "w_mbytes_per_sec": 0 00:10:02.361 }, 00:10:02.361 "claimed": false, 00:10:02.361 "zoned": false, 00:10:02.361 "supported_io_types": { 00:10:02.361 "read": true, 00:10:02.361 "write": true, 00:10:02.361 "unmap": true, 00:10:02.361 "flush": true, 00:10:02.361 "reset": true, 00:10:02.361 "nvme_admin": false, 
00:10:02.361 "nvme_io": false, 00:10:02.361 "nvme_io_md": false, 00:10:02.361 "write_zeroes": true, 00:10:02.361 "zcopy": true, 00:10:02.361 "get_zone_info": false, 00:10:02.361 "zone_management": false, 00:10:02.361 "zone_append": false, 00:10:02.361 "compare": false, 00:10:02.361 "compare_and_write": false, 00:10:02.361 "abort": true, 00:10:02.361 "seek_hole": false, 00:10:02.361 "seek_data": false, 00:10:02.361 "copy": true, 00:10:02.361 "nvme_iov_md": false 00:10:02.361 }, 00:10:02.361 "memory_domains": [ 00:10:02.361 { 00:10:02.361 "dma_device_id": "system", 00:10:02.361 "dma_device_type": 1 00:10:02.361 }, 00:10:02.361 { 00:10:02.361 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.361 "dma_device_type": 2 00:10:02.361 } 00:10:02.361 ], 00:10:02.361 "driver_specific": {} 00:10:02.361 } 00:10:02.361 ] 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.361 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.621 [2024-12-14 12:36:02.096758] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.621 [2024-12-14 12:36:02.096850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.621 [2024-12-14 12:36:02.096911] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.621 [2024-12-14 12:36:02.098967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.621 
12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.621 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.621 "name": "Existed_Raid", 00:10:02.621 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:02.621 "strip_size_kb": 0, 00:10:02.621 "state": "configuring", 00:10:02.621 "raid_level": "raid1", 00:10:02.621 "superblock": true, 00:10:02.621 "num_base_bdevs": 3, 00:10:02.622 "num_base_bdevs_discovered": 2, 00:10:02.622 "num_base_bdevs_operational": 3, 00:10:02.622 "base_bdevs_list": [ 00:10:02.622 { 00:10:02.622 "name": "BaseBdev1", 00:10:02.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.622 "is_configured": false, 00:10:02.622 "data_offset": 0, 00:10:02.622 "data_size": 0 00:10:02.622 }, 00:10:02.622 { 00:10:02.622 "name": "BaseBdev2", 00:10:02.622 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:02.622 "is_configured": true, 00:10:02.622 "data_offset": 2048, 00:10:02.622 "data_size": 63488 00:10:02.622 }, 00:10:02.622 { 00:10:02.622 "name": "BaseBdev3", 00:10:02.622 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:02.622 "is_configured": true, 00:10:02.622 "data_offset": 2048, 00:10:02.622 "data_size": 63488 00:10:02.622 } 00:10:02.622 ] 00:10:02.622 }' 00:10:02.622 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.622 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.881 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:02.881 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.881 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.881 [2024-12-14 12:36:02.579977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:02.882 12:36:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.882 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.141 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.141 "name": 
"Existed_Raid", 00:10:03.141 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:03.141 "strip_size_kb": 0, 00:10:03.141 "state": "configuring", 00:10:03.141 "raid_level": "raid1", 00:10:03.141 "superblock": true, 00:10:03.141 "num_base_bdevs": 3, 00:10:03.141 "num_base_bdevs_discovered": 1, 00:10:03.141 "num_base_bdevs_operational": 3, 00:10:03.141 "base_bdevs_list": [ 00:10:03.141 { 00:10:03.141 "name": "BaseBdev1", 00:10:03.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.141 "is_configured": false, 00:10:03.141 "data_offset": 0, 00:10:03.141 "data_size": 0 00:10:03.141 }, 00:10:03.141 { 00:10:03.141 "name": null, 00:10:03.141 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:03.141 "is_configured": false, 00:10:03.141 "data_offset": 0, 00:10:03.141 "data_size": 63488 00:10:03.141 }, 00:10:03.141 { 00:10:03.141 "name": "BaseBdev3", 00:10:03.141 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:03.141 "is_configured": true, 00:10:03.141 "data_offset": 2048, 00:10:03.141 "data_size": 63488 00:10:03.141 } 00:10:03.141 ] 00:10:03.141 }' 00:10:03.141 12:36:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.141 12:36:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.400 
12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.400 [2024-12-14 12:36:03.103591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.400 BaseBdev1 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.400 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:03.401 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.401 [ 00:10:03.401 { 00:10:03.401 "name": "BaseBdev1", 00:10:03.401 "aliases": [ 00:10:03.401 "e0f0f515-3778-4af6-81e5-90fdff7fba71" 00:10:03.401 ], 00:10:03.401 "product_name": "Malloc disk", 00:10:03.401 "block_size": 512, 00:10:03.401 "num_blocks": 65536, 00:10:03.401 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:03.401 "assigned_rate_limits": { 00:10:03.401 "rw_ios_per_sec": 0, 00:10:03.401 "rw_mbytes_per_sec": 0, 00:10:03.401 "r_mbytes_per_sec": 0, 00:10:03.401 "w_mbytes_per_sec": 0 00:10:03.401 }, 00:10:03.401 "claimed": true, 00:10:03.401 "claim_type": "exclusive_write", 00:10:03.401 "zoned": false, 00:10:03.401 "supported_io_types": { 00:10:03.401 "read": true, 00:10:03.401 "write": true, 00:10:03.401 "unmap": true, 00:10:03.401 "flush": true, 00:10:03.401 "reset": true, 00:10:03.401 "nvme_admin": false, 00:10:03.401 "nvme_io": false, 00:10:03.401 "nvme_io_md": false, 00:10:03.401 "write_zeroes": true, 00:10:03.401 "zcopy": true, 00:10:03.401 "get_zone_info": false, 00:10:03.401 "zone_management": false, 00:10:03.660 "zone_append": false, 00:10:03.660 "compare": false, 00:10:03.660 "compare_and_write": false, 00:10:03.660 "abort": true, 00:10:03.660 "seek_hole": false, 00:10:03.660 "seek_data": false, 00:10:03.660 "copy": true, 00:10:03.660 "nvme_iov_md": false 00:10:03.660 }, 00:10:03.660 "memory_domains": [ 00:10:03.660 { 00:10:03.660 "dma_device_id": "system", 00:10:03.660 "dma_device_type": 1 00:10:03.660 }, 00:10:03.660 { 00:10:03.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.660 "dma_device_type": 2 00:10:03.660 } 00:10:03.660 ], 00:10:03.660 "driver_specific": {} 00:10:03.660 } 00:10:03.660 ] 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.660 
12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.660 "name": "Existed_Raid", 00:10:03.660 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:03.660 "strip_size_kb": 0, 
00:10:03.660 "state": "configuring", 00:10:03.660 "raid_level": "raid1", 00:10:03.660 "superblock": true, 00:10:03.660 "num_base_bdevs": 3, 00:10:03.660 "num_base_bdevs_discovered": 2, 00:10:03.660 "num_base_bdevs_operational": 3, 00:10:03.660 "base_bdevs_list": [ 00:10:03.660 { 00:10:03.660 "name": "BaseBdev1", 00:10:03.660 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:03.660 "is_configured": true, 00:10:03.660 "data_offset": 2048, 00:10:03.660 "data_size": 63488 00:10:03.660 }, 00:10:03.660 { 00:10:03.660 "name": null, 00:10:03.660 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:03.660 "is_configured": false, 00:10:03.660 "data_offset": 0, 00:10:03.660 "data_size": 63488 00:10:03.660 }, 00:10:03.660 { 00:10:03.660 "name": "BaseBdev3", 00:10:03.660 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:03.660 "is_configured": true, 00:10:03.660 "data_offset": 2048, 00:10:03.660 "data_size": 63488 00:10:03.660 } 00:10:03.660 ] 00:10:03.660 }' 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.660 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.921 [2024-12-14 12:36:03.590850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.921 "name": "Existed_Raid", 00:10:03.921 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:03.921 "strip_size_kb": 0, 00:10:03.921 "state": "configuring", 00:10:03.921 "raid_level": "raid1", 00:10:03.921 "superblock": true, 00:10:03.921 "num_base_bdevs": 3, 00:10:03.921 "num_base_bdevs_discovered": 1, 00:10:03.921 "num_base_bdevs_operational": 3, 00:10:03.921 "base_bdevs_list": [ 00:10:03.921 { 00:10:03.921 "name": "BaseBdev1", 00:10:03.921 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:03.921 "is_configured": true, 00:10:03.921 "data_offset": 2048, 00:10:03.921 "data_size": 63488 00:10:03.921 }, 00:10:03.921 { 00:10:03.921 "name": null, 00:10:03.921 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:03.921 "is_configured": false, 00:10:03.921 "data_offset": 0, 00:10:03.921 "data_size": 63488 00:10:03.921 }, 00:10:03.921 { 00:10:03.921 "name": null, 00:10:03.921 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:03.921 "is_configured": false, 00:10:03.921 "data_offset": 0, 00:10:03.921 "data_size": 63488 00:10:03.921 } 00:10:03.921 ] 00:10:03.921 }' 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.921 12:36:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.489 [2024-12-14 12:36:04.058116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.489 "name": "Existed_Raid", 00:10:04.489 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:04.489 "strip_size_kb": 0, 00:10:04.489 "state": "configuring", 00:10:04.489 "raid_level": "raid1", 00:10:04.489 "superblock": true, 00:10:04.489 "num_base_bdevs": 3, 00:10:04.489 "num_base_bdevs_discovered": 2, 00:10:04.489 "num_base_bdevs_operational": 3, 00:10:04.489 "base_bdevs_list": [ 00:10:04.489 { 00:10:04.489 "name": "BaseBdev1", 00:10:04.489 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:04.489 "is_configured": true, 00:10:04.489 "data_offset": 2048, 00:10:04.489 "data_size": 63488 00:10:04.489 }, 00:10:04.489 { 00:10:04.489 "name": null, 00:10:04.489 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:04.489 "is_configured": false, 00:10:04.489 "data_offset": 0, 00:10:04.489 "data_size": 63488 00:10:04.489 }, 00:10:04.489 { 00:10:04.489 "name": "BaseBdev3", 00:10:04.489 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:04.489 "is_configured": true, 00:10:04.489 "data_offset": 2048, 00:10:04.489 "data_size": 63488 00:10:04.489 } 00:10:04.489 ] 00:10:04.489 }' 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.489 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.058 [2024-12-14 12:36:04.561260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.058 "name": "Existed_Raid", 00:10:05.058 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:05.058 "strip_size_kb": 0, 00:10:05.058 "state": "configuring", 00:10:05.058 "raid_level": "raid1", 00:10:05.058 "superblock": true, 00:10:05.058 "num_base_bdevs": 3, 00:10:05.058 "num_base_bdevs_discovered": 1, 00:10:05.058 "num_base_bdevs_operational": 3, 00:10:05.058 "base_bdevs_list": [ 00:10:05.058 { 00:10:05.058 "name": null, 00:10:05.058 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:05.058 "is_configured": false, 00:10:05.058 "data_offset": 0, 00:10:05.058 "data_size": 63488 00:10:05.058 }, 00:10:05.058 { 00:10:05.058 "name": null, 00:10:05.058 "uuid": 
"cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:05.058 "is_configured": false, 00:10:05.058 "data_offset": 0, 00:10:05.058 "data_size": 63488 00:10:05.058 }, 00:10:05.058 { 00:10:05.058 "name": "BaseBdev3", 00:10:05.058 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:05.058 "is_configured": true, 00:10:05.058 "data_offset": 2048, 00:10:05.058 "data_size": 63488 00:10:05.058 } 00:10:05.058 ] 00:10:05.058 }' 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.058 12:36:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.627 [2024-12-14 12:36:05.128309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.627 "name": "Existed_Raid", 00:10:05.627 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:05.627 "strip_size_kb": 0, 00:10:05.627 "state": "configuring", 00:10:05.627 
"raid_level": "raid1", 00:10:05.627 "superblock": true, 00:10:05.627 "num_base_bdevs": 3, 00:10:05.627 "num_base_bdevs_discovered": 2, 00:10:05.627 "num_base_bdevs_operational": 3, 00:10:05.627 "base_bdevs_list": [ 00:10:05.627 { 00:10:05.627 "name": null, 00:10:05.627 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:05.627 "is_configured": false, 00:10:05.627 "data_offset": 0, 00:10:05.627 "data_size": 63488 00:10:05.627 }, 00:10:05.627 { 00:10:05.627 "name": "BaseBdev2", 00:10:05.627 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:05.627 "is_configured": true, 00:10:05.627 "data_offset": 2048, 00:10:05.627 "data_size": 63488 00:10:05.627 }, 00:10:05.627 { 00:10:05.627 "name": "BaseBdev3", 00:10:05.627 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:05.627 "is_configured": true, 00:10:05.627 "data_offset": 2048, 00:10:05.627 "data_size": 63488 00:10:05.627 } 00:10:05.627 ] 00:10:05.627 }' 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.627 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.886 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.886 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:05.886 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.886 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.886 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.886 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:05.886 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.886 12:36:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.886 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e0f0f515-3778-4af6-81e5-90fdff7fba71 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.146 [2024-12-14 12:36:05.711676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.146 [2024-12-14 12:36:05.711894] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:06.146 [2024-12-14 12:36:05.711907] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:06.146 [2024-12-14 12:36:05.712177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:06.146 [2024-12-14 12:36:05.712338] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:06.146 [2024-12-14 12:36:05.712349] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:06.146 NewBaseBdev 00:10:06.146 [2024-12-14 12:36:05.712499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.146 
12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.146 [ 00:10:06.146 { 00:10:06.146 "name": "NewBaseBdev", 00:10:06.146 "aliases": [ 00:10:06.146 "e0f0f515-3778-4af6-81e5-90fdff7fba71" 00:10:06.146 ], 00:10:06.146 "product_name": "Malloc disk", 00:10:06.146 "block_size": 512, 00:10:06.146 "num_blocks": 65536, 00:10:06.146 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:06.146 "assigned_rate_limits": { 00:10:06.146 "rw_ios_per_sec": 0, 00:10:06.146 "rw_mbytes_per_sec": 0, 00:10:06.146 "r_mbytes_per_sec": 0, 00:10:06.146 "w_mbytes_per_sec": 0 00:10:06.146 }, 00:10:06.146 "claimed": true, 00:10:06.146 "claim_type": "exclusive_write", 00:10:06.146 
"zoned": false, 00:10:06.146 "supported_io_types": { 00:10:06.146 "read": true, 00:10:06.146 "write": true, 00:10:06.146 "unmap": true, 00:10:06.146 "flush": true, 00:10:06.146 "reset": true, 00:10:06.146 "nvme_admin": false, 00:10:06.146 "nvme_io": false, 00:10:06.146 "nvme_io_md": false, 00:10:06.146 "write_zeroes": true, 00:10:06.146 "zcopy": true, 00:10:06.146 "get_zone_info": false, 00:10:06.146 "zone_management": false, 00:10:06.146 "zone_append": false, 00:10:06.146 "compare": false, 00:10:06.146 "compare_and_write": false, 00:10:06.146 "abort": true, 00:10:06.146 "seek_hole": false, 00:10:06.146 "seek_data": false, 00:10:06.146 "copy": true, 00:10:06.146 "nvme_iov_md": false 00:10:06.146 }, 00:10:06.146 "memory_domains": [ 00:10:06.146 { 00:10:06.146 "dma_device_id": "system", 00:10:06.146 "dma_device_type": 1 00:10:06.146 }, 00:10:06.146 { 00:10:06.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.146 "dma_device_type": 2 00:10:06.146 } 00:10:06.146 ], 00:10:06.146 "driver_specific": {} 00:10:06.146 } 00:10:06.146 ] 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.146 "name": "Existed_Raid", 00:10:06.146 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:06.146 "strip_size_kb": 0, 00:10:06.146 "state": "online", 00:10:06.146 "raid_level": "raid1", 00:10:06.146 "superblock": true, 00:10:06.146 "num_base_bdevs": 3, 00:10:06.146 "num_base_bdevs_discovered": 3, 00:10:06.146 "num_base_bdevs_operational": 3, 00:10:06.146 "base_bdevs_list": [ 00:10:06.146 { 00:10:06.146 "name": "NewBaseBdev", 00:10:06.146 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:06.146 "is_configured": true, 00:10:06.146 "data_offset": 2048, 00:10:06.146 "data_size": 63488 00:10:06.146 }, 00:10:06.146 { 00:10:06.146 "name": "BaseBdev2", 00:10:06.146 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:06.146 "is_configured": true, 00:10:06.146 "data_offset": 2048, 00:10:06.146 "data_size": 63488 00:10:06.146 }, 00:10:06.146 
{ 00:10:06.146 "name": "BaseBdev3", 00:10:06.146 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:06.146 "is_configured": true, 00:10:06.146 "data_offset": 2048, 00:10:06.146 "data_size": 63488 00:10:06.146 } 00:10:06.146 ] 00:10:06.146 }' 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.146 12:36:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.715 [2024-12-14 12:36:06.207255] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.715 "name": "Existed_Raid", 00:10:06.715 
"aliases": [ 00:10:06.715 "6da3fd68-50fd-42fc-81ca-5972810c1321" 00:10:06.715 ], 00:10:06.715 "product_name": "Raid Volume", 00:10:06.715 "block_size": 512, 00:10:06.715 "num_blocks": 63488, 00:10:06.715 "uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:06.715 "assigned_rate_limits": { 00:10:06.715 "rw_ios_per_sec": 0, 00:10:06.715 "rw_mbytes_per_sec": 0, 00:10:06.715 "r_mbytes_per_sec": 0, 00:10:06.715 "w_mbytes_per_sec": 0 00:10:06.715 }, 00:10:06.715 "claimed": false, 00:10:06.715 "zoned": false, 00:10:06.715 "supported_io_types": { 00:10:06.715 "read": true, 00:10:06.715 "write": true, 00:10:06.715 "unmap": false, 00:10:06.715 "flush": false, 00:10:06.715 "reset": true, 00:10:06.715 "nvme_admin": false, 00:10:06.715 "nvme_io": false, 00:10:06.715 "nvme_io_md": false, 00:10:06.715 "write_zeroes": true, 00:10:06.715 "zcopy": false, 00:10:06.715 "get_zone_info": false, 00:10:06.715 "zone_management": false, 00:10:06.715 "zone_append": false, 00:10:06.715 "compare": false, 00:10:06.715 "compare_and_write": false, 00:10:06.715 "abort": false, 00:10:06.715 "seek_hole": false, 00:10:06.715 "seek_data": false, 00:10:06.715 "copy": false, 00:10:06.715 "nvme_iov_md": false 00:10:06.715 }, 00:10:06.715 "memory_domains": [ 00:10:06.715 { 00:10:06.715 "dma_device_id": "system", 00:10:06.715 "dma_device_type": 1 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.715 "dma_device_type": 2 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "dma_device_id": "system", 00:10:06.715 "dma_device_type": 1 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.715 "dma_device_type": 2 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "dma_device_id": "system", 00:10:06.715 "dma_device_type": 1 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.715 "dma_device_type": 2 00:10:06.715 } 00:10:06.715 ], 00:10:06.715 "driver_specific": { 00:10:06.715 "raid": { 00:10:06.715 
"uuid": "6da3fd68-50fd-42fc-81ca-5972810c1321", 00:10:06.715 "strip_size_kb": 0, 00:10:06.715 "state": "online", 00:10:06.715 "raid_level": "raid1", 00:10:06.715 "superblock": true, 00:10:06.715 "num_base_bdevs": 3, 00:10:06.715 "num_base_bdevs_discovered": 3, 00:10:06.715 "num_base_bdevs_operational": 3, 00:10:06.715 "base_bdevs_list": [ 00:10:06.715 { 00:10:06.715 "name": "NewBaseBdev", 00:10:06.715 "uuid": "e0f0f515-3778-4af6-81e5-90fdff7fba71", 00:10:06.715 "is_configured": true, 00:10:06.715 "data_offset": 2048, 00:10:06.715 "data_size": 63488 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "name": "BaseBdev2", 00:10:06.715 "uuid": "cabfeb84-c68f-40d9-b67d-fc1220a0d6af", 00:10:06.715 "is_configured": true, 00:10:06.715 "data_offset": 2048, 00:10:06.715 "data_size": 63488 00:10:06.715 }, 00:10:06.715 { 00:10:06.715 "name": "BaseBdev3", 00:10:06.715 "uuid": "64a954d9-68d9-493d-95a9-6f891f6dbc48", 00:10:06.715 "is_configured": true, 00:10:06.715 "data_offset": 2048, 00:10:06.715 "data_size": 63488 00:10:06.715 } 00:10:06.715 ] 00:10:06.715 } 00:10:06.715 } 00:10:06.715 }' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:06.715 BaseBdev2 00:10:06.715 BaseBdev3' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:06.715 12:36:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.715 12:36:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.715 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.975 [2024-12-14 12:36:06.466437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.975 [2024-12-14 12:36:06.466469] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.975 [2024-12-14 12:36:06.466541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.975 [2024-12-14 12:36:06.466823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.975 [2024-12-14 12:36:06.466834] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69813 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 69813 ']' 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69813 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69813 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.975 killing process with pid 69813 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69813' 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69813 00:10:06.975 [2024-12-14 12:36:06.504652] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:06.975 12:36:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69813 00:10:07.235 [2024-12-14 12:36:06.810863] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.629 12:36:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:08.629 00:10:08.629 real 0m10.720s 00:10:08.629 user 0m17.091s 00:10:08.629 sys 0m1.787s 00:10:08.629 12:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.629 12:36:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.629 ************************************ 00:10:08.629 END TEST raid_state_function_test_sb 00:10:08.629 ************************************ 00:10:08.629 12:36:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:10:08.629 12:36:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.629 12:36:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.629 12:36:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.629 ************************************ 00:10:08.629 START TEST raid_superblock_test 00:10:08.629 ************************************ 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70434 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70434 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70434 ']' 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.629 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.629 [2024-12-14 12:36:08.112124] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:08.629 [2024-12-14 12:36:08.112325] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70434 ] 00:10:08.629 [2024-12-14 12:36:08.286590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.927 [2024-12-14 12:36:08.409364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.927 [2024-12-14 12:36:08.607948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:08.927 [2024-12-14 12:36:08.608017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:09.510 
12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 malloc1 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 12:36:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 [2024-12-14 12:36:09.001995] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:09.510 [2024-12-14 12:36:09.002132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.510 [2024-12-14 12:36:09.002173] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:09.510 [2024-12-14 12:36:09.002203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.510 [2024-12-14 12:36:09.004288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.510 [2024-12-14 12:36:09.004355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:09.510 pt1 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 malloc2 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 [2024-12-14 12:36:09.061611] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.510 [2024-12-14 12:36:09.061726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.510 [2024-12-14 12:36:09.061758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:09.510 [2024-12-14 12:36:09.061768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.510 [2024-12-14 12:36:09.064129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.510 [2024-12-14 12:36:09.064163] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.510 
pt2 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 malloc3 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 [2024-12-14 12:36:09.128870] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:09.510 [2024-12-14 12:36:09.128930] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.510 [2024-12-14 12:36:09.128951] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:09.510 [2024-12-14 12:36:09.128959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.510 [2024-12-14 12:36:09.131312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.510 [2024-12-14 12:36:09.131411] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:09.510 pt3 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.510 [2024-12-14 12:36:09.140887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:09.510 [2024-12-14 12:36:09.142950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.510 [2024-12-14 12:36:09.143112] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:09.510 [2024-12-14 12:36:09.143314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:09.510 [2024-12-14 12:36:09.143337] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:09.510 [2024-12-14 12:36:09.143624] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:09.510 
[2024-12-14 12:36:09.143808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:09.510 [2024-12-14 12:36:09.143821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:09.510 [2024-12-14 12:36:09.143999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.510 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.510 "name": "raid_bdev1", 00:10:09.510 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:09.510 "strip_size_kb": 0, 00:10:09.510 "state": "online", 00:10:09.510 "raid_level": "raid1", 00:10:09.510 "superblock": true, 00:10:09.510 "num_base_bdevs": 3, 00:10:09.510 "num_base_bdevs_discovered": 3, 00:10:09.510 "num_base_bdevs_operational": 3, 00:10:09.510 "base_bdevs_list": [ 00:10:09.510 { 00:10:09.510 "name": "pt1", 00:10:09.510 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.510 "is_configured": true, 00:10:09.510 "data_offset": 2048, 00:10:09.510 "data_size": 63488 00:10:09.510 }, 00:10:09.510 { 00:10:09.510 "name": "pt2", 00:10:09.510 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.511 "is_configured": true, 00:10:09.511 "data_offset": 2048, 00:10:09.511 "data_size": 63488 00:10:09.511 }, 00:10:09.511 { 00:10:09.511 "name": "pt3", 00:10:09.511 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.511 "is_configured": true, 00:10:09.511 "data_offset": 2048, 00:10:09.511 "data_size": 63488 00:10:09.511 } 00:10:09.511 ] 00:10:09.511 }' 00:10:09.511 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.511 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.076 12:36:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.076 [2024-12-14 12:36:09.588443] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.076 "name": "raid_bdev1", 00:10:10.076 "aliases": [ 00:10:10.076 "52bc476a-eda0-4a45-9aa6-bd4cc278ea99" 00:10:10.076 ], 00:10:10.076 "product_name": "Raid Volume", 00:10:10.076 "block_size": 512, 00:10:10.076 "num_blocks": 63488, 00:10:10.076 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:10.076 "assigned_rate_limits": { 00:10:10.076 "rw_ios_per_sec": 0, 00:10:10.076 "rw_mbytes_per_sec": 0, 00:10:10.076 "r_mbytes_per_sec": 0, 00:10:10.076 "w_mbytes_per_sec": 0 00:10:10.076 }, 00:10:10.076 "claimed": false, 00:10:10.076 "zoned": false, 00:10:10.076 "supported_io_types": { 00:10:10.076 "read": true, 00:10:10.076 "write": true, 00:10:10.076 "unmap": false, 00:10:10.076 "flush": false, 00:10:10.076 "reset": true, 00:10:10.076 "nvme_admin": false, 00:10:10.076 "nvme_io": false, 00:10:10.076 "nvme_io_md": false, 00:10:10.076 "write_zeroes": true, 00:10:10.076 "zcopy": false, 00:10:10.076 "get_zone_info": false, 00:10:10.076 "zone_management": false, 00:10:10.076 "zone_append": false, 00:10:10.076 "compare": false, 00:10:10.076 
"compare_and_write": false, 00:10:10.076 "abort": false, 00:10:10.076 "seek_hole": false, 00:10:10.076 "seek_data": false, 00:10:10.076 "copy": false, 00:10:10.076 "nvme_iov_md": false 00:10:10.076 }, 00:10:10.076 "memory_domains": [ 00:10:10.076 { 00:10:10.076 "dma_device_id": "system", 00:10:10.076 "dma_device_type": 1 00:10:10.076 }, 00:10:10.076 { 00:10:10.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.076 "dma_device_type": 2 00:10:10.076 }, 00:10:10.076 { 00:10:10.076 "dma_device_id": "system", 00:10:10.076 "dma_device_type": 1 00:10:10.076 }, 00:10:10.076 { 00:10:10.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.076 "dma_device_type": 2 00:10:10.076 }, 00:10:10.076 { 00:10:10.076 "dma_device_id": "system", 00:10:10.076 "dma_device_type": 1 00:10:10.076 }, 00:10:10.076 { 00:10:10.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.076 "dma_device_type": 2 00:10:10.076 } 00:10:10.076 ], 00:10:10.076 "driver_specific": { 00:10:10.076 "raid": { 00:10:10.076 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:10.076 "strip_size_kb": 0, 00:10:10.076 "state": "online", 00:10:10.076 "raid_level": "raid1", 00:10:10.076 "superblock": true, 00:10:10.076 "num_base_bdevs": 3, 00:10:10.076 "num_base_bdevs_discovered": 3, 00:10:10.076 "num_base_bdevs_operational": 3, 00:10:10.076 "base_bdevs_list": [ 00:10:10.076 { 00:10:10.076 "name": "pt1", 00:10:10.076 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.076 "is_configured": true, 00:10:10.076 "data_offset": 2048, 00:10:10.076 "data_size": 63488 00:10:10.076 }, 00:10:10.076 { 00:10:10.076 "name": "pt2", 00:10:10.076 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.076 "is_configured": true, 00:10:10.076 "data_offset": 2048, 00:10:10.076 "data_size": 63488 00:10:10.076 }, 00:10:10.076 { 00:10:10.076 "name": "pt3", 00:10:10.076 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.076 "is_configured": true, 00:10:10.076 "data_offset": 2048, 00:10:10.076 "data_size": 63488 00:10:10.076 } 
00:10:10.076 ] 00:10:10.076 } 00:10:10.076 } 00:10:10.076 }' 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:10.076 pt2 00:10:10.076 pt3' 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.076 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.077 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.077 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.077 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.077 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.077 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:10.077 12:36:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.077 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.077 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:10.335 [2024-12-14 12:36:09.879917] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=52bc476a-eda0-4a45-9aa6-bd4cc278ea99 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 52bc476a-eda0-4a45-9aa6-bd4cc278ea99 ']' 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 [2024-12-14 12:36:09.927516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.335 [2024-12-14 12:36:09.927546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.335 [2024-12-14 12:36:09.927631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.335 [2024-12-14 12:36:09.927709] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.335 [2024-12-14 12:36:09.927719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.335 12:36:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:10.335 12:36:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.335 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.594 [2024-12-14 12:36:10.075316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:10.594 [2024-12-14 12:36:10.077313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:10.594 [2024-12-14 12:36:10.077374] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:10.594 [2024-12-14 12:36:10.077428] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:10.594 [2024-12-14 12:36:10.077480] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:10.594 [2024-12-14 12:36:10.077500] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:10.594 [2024-12-14 12:36:10.077516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.594 [2024-12-14 12:36:10.077526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:10.594 request: 00:10:10.594 { 00:10:10.594 "name": "raid_bdev1", 00:10:10.594 "raid_level": "raid1", 00:10:10.594 "base_bdevs": [ 00:10:10.594 "malloc1", 00:10:10.594 "malloc2", 00:10:10.594 "malloc3" 00:10:10.594 ], 00:10:10.594 "superblock": false, 00:10:10.594 "method": "bdev_raid_create", 00:10:10.594 "req_id": 1 00:10:10.594 } 00:10:10.594 Got JSON-RPC error response 00:10:10.594 response: 00:10:10.594 { 00:10:10.594 "code": -17, 00:10:10.594 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:10.594 } 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.594 [2024-12-14 12:36:10.143191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.594 [2024-12-14 12:36:10.143332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.594 [2024-12-14 12:36:10.143375] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:10.594 [2024-12-14 12:36:10.143409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.594 [2024-12-14 12:36:10.145681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.594 [2024-12-14 12:36:10.145753] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.594 [2024-12-14 12:36:10.145866] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:10.594 [2024-12-14 12:36:10.145965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:10.594 pt1 00:10:10.594 
12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.594 "name": "raid_bdev1", 00:10:10.594 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:10.594 "strip_size_kb": 0, 00:10:10.594 
"state": "configuring", 00:10:10.594 "raid_level": "raid1", 00:10:10.594 "superblock": true, 00:10:10.594 "num_base_bdevs": 3, 00:10:10.594 "num_base_bdevs_discovered": 1, 00:10:10.594 "num_base_bdevs_operational": 3, 00:10:10.594 "base_bdevs_list": [ 00:10:10.594 { 00:10:10.594 "name": "pt1", 00:10:10.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.594 "is_configured": true, 00:10:10.594 "data_offset": 2048, 00:10:10.594 "data_size": 63488 00:10:10.594 }, 00:10:10.594 { 00:10:10.594 "name": null, 00:10:10.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.594 "is_configured": false, 00:10:10.594 "data_offset": 2048, 00:10:10.594 "data_size": 63488 00:10:10.594 }, 00:10:10.594 { 00:10:10.594 "name": null, 00:10:10.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.594 "is_configured": false, 00:10:10.594 "data_offset": 2048, 00:10:10.594 "data_size": 63488 00:10:10.594 } 00:10:10.594 ] 00:10:10.594 }' 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.594 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.161 [2024-12-14 12:36:10.602404] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.161 [2024-12-14 12:36:10.602470] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.161 [2024-12-14 12:36:10.602493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:11.161 
[2024-12-14 12:36:10.602502] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.161 [2024-12-14 12:36:10.602961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.161 [2024-12-14 12:36:10.602994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.161 [2024-12-14 12:36:10.603108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.161 [2024-12-14 12:36:10.603135] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.161 pt2 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.161 [2024-12-14 12:36:10.610374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.161 "name": "raid_bdev1", 00:10:11.161 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:11.161 "strip_size_kb": 0, 00:10:11.161 "state": "configuring", 00:10:11.161 "raid_level": "raid1", 00:10:11.161 "superblock": true, 00:10:11.161 "num_base_bdevs": 3, 00:10:11.161 "num_base_bdevs_discovered": 1, 00:10:11.161 "num_base_bdevs_operational": 3, 00:10:11.161 "base_bdevs_list": [ 00:10:11.161 { 00:10:11.161 "name": "pt1", 00:10:11.161 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.161 "is_configured": true, 00:10:11.161 "data_offset": 2048, 00:10:11.161 "data_size": 63488 00:10:11.161 }, 00:10:11.161 { 00:10:11.161 "name": null, 00:10:11.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.161 "is_configured": false, 00:10:11.161 "data_offset": 0, 00:10:11.161 "data_size": 63488 00:10:11.161 }, 00:10:11.161 { 00:10:11.161 "name": null, 00:10:11.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.161 "is_configured": false, 00:10:11.161 
"data_offset": 2048, 00:10:11.161 "data_size": 63488 00:10:11.161 } 00:10:11.161 ] 00:10:11.161 }' 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.161 12:36:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.419 [2024-12-14 12:36:11.041638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.419 [2024-12-14 12:36:11.041768] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.419 [2024-12-14 12:36:11.041821] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:11.419 [2024-12-14 12:36:11.041868] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.419 [2024-12-14 12:36:11.042407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.419 [2024-12-14 12:36:11.042473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.419 [2024-12-14 12:36:11.042591] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.419 [2024-12-14 12:36:11.042660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.419 pt2 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.419 12:36:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.419 [2024-12-14 12:36:11.053578] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:11.419 [2024-12-14 12:36:11.053659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.419 [2024-12-14 12:36:11.053689] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:11.419 [2024-12-14 12:36:11.053720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.419 [2024-12-14 12:36:11.054167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.419 [2024-12-14 12:36:11.054232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:11.419 [2024-12-14 12:36:11.054341] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:11.419 [2024-12-14 12:36:11.054395] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:11.419 [2024-12-14 12:36:11.054560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.419 [2024-12-14 12:36:11.054606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:11.419 [2024-12-14 12:36:11.054884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:11.419 [2024-12-14 12:36:11.055116] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:11.419 [2024-12-14 12:36:11.055160] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:11.419 [2024-12-14 12:36:11.055357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.419 pt3 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:11.419 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.420 12:36:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.420 "name": "raid_bdev1", 00:10:11.420 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:11.420 "strip_size_kb": 0, 00:10:11.420 "state": "online", 00:10:11.420 "raid_level": "raid1", 00:10:11.420 "superblock": true, 00:10:11.420 "num_base_bdevs": 3, 00:10:11.420 "num_base_bdevs_discovered": 3, 00:10:11.420 "num_base_bdevs_operational": 3, 00:10:11.420 "base_bdevs_list": [ 00:10:11.420 { 00:10:11.420 "name": "pt1", 00:10:11.420 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.420 "is_configured": true, 00:10:11.420 "data_offset": 2048, 00:10:11.420 "data_size": 63488 00:10:11.420 }, 00:10:11.420 { 00:10:11.420 "name": "pt2", 00:10:11.420 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.420 "is_configured": true, 00:10:11.420 "data_offset": 2048, 00:10:11.420 "data_size": 63488 00:10:11.420 }, 00:10:11.420 { 00:10:11.420 "name": "pt3", 00:10:11.420 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.420 "is_configured": true, 00:10:11.420 "data_offset": 2048, 00:10:11.420 "data_size": 63488 00:10:11.420 } 00:10:11.420 ] 00:10:11.420 }' 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.420 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.986 [2024-12-14 12:36:11.517136] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.986 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:11.986 "name": "raid_bdev1", 00:10:11.986 "aliases": [ 00:10:11.986 "52bc476a-eda0-4a45-9aa6-bd4cc278ea99" 00:10:11.986 ], 00:10:11.986 "product_name": "Raid Volume", 00:10:11.986 "block_size": 512, 00:10:11.986 "num_blocks": 63488, 00:10:11.986 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:11.986 "assigned_rate_limits": { 00:10:11.986 "rw_ios_per_sec": 0, 00:10:11.986 "rw_mbytes_per_sec": 0, 00:10:11.986 "r_mbytes_per_sec": 0, 00:10:11.986 "w_mbytes_per_sec": 0 00:10:11.986 }, 00:10:11.986 "claimed": false, 00:10:11.986 "zoned": false, 00:10:11.986 "supported_io_types": { 00:10:11.986 "read": true, 00:10:11.986 "write": true, 00:10:11.986 "unmap": false, 00:10:11.986 "flush": false, 00:10:11.986 "reset": true, 00:10:11.986 "nvme_admin": false, 00:10:11.986 "nvme_io": false, 00:10:11.986 "nvme_io_md": false, 00:10:11.986 "write_zeroes": true, 00:10:11.986 "zcopy": false, 00:10:11.986 "get_zone_info": 
false, 00:10:11.986 "zone_management": false, 00:10:11.986 "zone_append": false, 00:10:11.986 "compare": false, 00:10:11.986 "compare_and_write": false, 00:10:11.986 "abort": false, 00:10:11.986 "seek_hole": false, 00:10:11.986 "seek_data": false, 00:10:11.986 "copy": false, 00:10:11.986 "nvme_iov_md": false 00:10:11.986 }, 00:10:11.986 "memory_domains": [ 00:10:11.986 { 00:10:11.986 "dma_device_id": "system", 00:10:11.986 "dma_device_type": 1 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.986 "dma_device_type": 2 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "dma_device_id": "system", 00:10:11.986 "dma_device_type": 1 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.986 "dma_device_type": 2 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "dma_device_id": "system", 00:10:11.986 "dma_device_type": 1 00:10:11.986 }, 00:10:11.986 { 00:10:11.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.986 "dma_device_type": 2 00:10:11.986 } 00:10:11.986 ], 00:10:11.986 "driver_specific": { 00:10:11.986 "raid": { 00:10:11.986 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:11.986 "strip_size_kb": 0, 00:10:11.986 "state": "online", 00:10:11.986 "raid_level": "raid1", 00:10:11.986 "superblock": true, 00:10:11.986 "num_base_bdevs": 3, 00:10:11.986 "num_base_bdevs_discovered": 3, 00:10:11.986 "num_base_bdevs_operational": 3, 00:10:11.986 "base_bdevs_list": [ 00:10:11.986 { 00:10:11.986 "name": "pt1", 00:10:11.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.986 "is_configured": true, 00:10:11.986 "data_offset": 2048, 00:10:11.986 "data_size": 63488 00:10:11.986 }, 00:10:11.986 { 00:10:11.987 "name": "pt2", 00:10:11.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.987 "is_configured": true, 00:10:11.987 "data_offset": 2048, 00:10:11.987 "data_size": 63488 00:10:11.987 }, 00:10:11.987 { 00:10:11.987 "name": "pt3", 00:10:11.987 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:11.987 "is_configured": true, 00:10:11.987 "data_offset": 2048, 00:10:11.987 "data_size": 63488 00:10:11.987 } 00:10:11.987 ] 00:10:11.987 } 00:10:11.987 } 00:10:11.987 }' 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:11.987 pt2 00:10:11.987 pt3' 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.987 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.245 [2024-12-14 12:36:11.804692] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 52bc476a-eda0-4a45-9aa6-bd4cc278ea99 '!=' 52bc476a-eda0-4a45-9aa6-bd4cc278ea99 ']' 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.245 [2024-12-14 12:36:11.848291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.245 12:36:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.245 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.245 "name": "raid_bdev1", 00:10:12.245 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:12.245 "strip_size_kb": 0, 00:10:12.245 "state": "online", 00:10:12.245 "raid_level": "raid1", 00:10:12.245 "superblock": true, 00:10:12.245 "num_base_bdevs": 3, 00:10:12.245 "num_base_bdevs_discovered": 2, 00:10:12.245 "num_base_bdevs_operational": 2, 00:10:12.245 "base_bdevs_list": [ 00:10:12.245 { 00:10:12.245 "name": null, 00:10:12.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.245 "is_configured": false, 00:10:12.246 "data_offset": 0, 00:10:12.246 "data_size": 63488 00:10:12.246 }, 00:10:12.246 { 00:10:12.246 "name": "pt2", 00:10:12.246 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.246 "is_configured": true, 00:10:12.246 "data_offset": 2048, 00:10:12.246 "data_size": 63488 00:10:12.246 }, 00:10:12.246 { 00:10:12.246 "name": "pt3", 00:10:12.246 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.246 "is_configured": true, 00:10:12.246 "data_offset": 2048, 00:10:12.246 "data_size": 63488 00:10:12.246 } 
00:10:12.246 ] 00:10:12.246 }' 00:10:12.246 12:36:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.246 12:36:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 [2024-12-14 12:36:12.263549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:12.811 [2024-12-14 12:36:12.263639] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:12.811 [2024-12-14 12:36:12.263744] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.811 [2024-12-14 12:36:12.263832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.811 [2024-12-14 12:36:12.263882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.811 12:36:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 [2024-12-14 12:36:12.347381] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:12.811 [2024-12-14 12:36:12.347502] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:12.811 [2024-12-14 12:36:12.347521] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:12.811 [2024-12-14 12:36:12.347532] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:12.811 [2024-12-14 12:36:12.349917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:12.811 [2024-12-14 12:36:12.349962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:12.811 [2024-12-14 12:36:12.350058] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:12.811 [2024-12-14 12:36:12.350126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:12.811 pt2 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.811 12:36:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.811 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.811 "name": "raid_bdev1", 00:10:12.811 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:12.812 "strip_size_kb": 0, 00:10:12.812 "state": "configuring", 00:10:12.812 "raid_level": "raid1", 00:10:12.812 "superblock": true, 00:10:12.812 "num_base_bdevs": 3, 00:10:12.812 "num_base_bdevs_discovered": 1, 00:10:12.812 "num_base_bdevs_operational": 2, 00:10:12.812 "base_bdevs_list": [ 00:10:12.812 { 00:10:12.812 "name": null, 00:10:12.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.812 "is_configured": false, 00:10:12.812 "data_offset": 2048, 00:10:12.812 "data_size": 63488 00:10:12.812 }, 00:10:12.812 { 00:10:12.812 "name": "pt2", 00:10:12.812 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.812 "is_configured": true, 00:10:12.812 "data_offset": 2048, 00:10:12.812 "data_size": 63488 00:10:12.812 }, 00:10:12.812 { 00:10:12.812 "name": null, 00:10:12.812 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.812 "is_configured": false, 00:10:12.812 "data_offset": 2048, 00:10:12.812 "data_size": 63488 00:10:12.812 } 
00:10:12.812 ] 00:10:12.812 }' 00:10:12.812 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.812 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.070 [2024-12-14 12:36:12.750742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:13.070 [2024-12-14 12:36:12.750871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.070 [2024-12-14 12:36:12.750908] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:13.070 [2024-12-14 12:36:12.750938] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.070 [2024-12-14 12:36:12.751440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.070 [2024-12-14 12:36:12.751505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:13.070 [2024-12-14 12:36:12.751637] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:13.070 [2024-12-14 12:36:12.751695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:13.070 [2024-12-14 12:36:12.751843] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:13.070 [2024-12-14 12:36:12.751884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.070 [2024-12-14 12:36:12.752179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:13.070 [2024-12-14 12:36:12.752364] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.070 [2024-12-14 12:36:12.752405] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:13.070 [2024-12-14 12:36:12.752590] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.070 pt3 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.070 
12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.070 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.070 "name": "raid_bdev1", 00:10:13.070 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:13.070 "strip_size_kb": 0, 00:10:13.070 "state": "online", 00:10:13.070 "raid_level": "raid1", 00:10:13.070 "superblock": true, 00:10:13.070 "num_base_bdevs": 3, 00:10:13.070 "num_base_bdevs_discovered": 2, 00:10:13.070 "num_base_bdevs_operational": 2, 00:10:13.070 "base_bdevs_list": [ 00:10:13.070 { 00:10:13.070 "name": null, 00:10:13.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.070 "is_configured": false, 00:10:13.070 "data_offset": 2048, 00:10:13.070 "data_size": 63488 00:10:13.070 }, 00:10:13.070 { 00:10:13.070 "name": "pt2", 00:10:13.070 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.070 "is_configured": true, 00:10:13.070 "data_offset": 2048, 00:10:13.070 "data_size": 63488 00:10:13.070 }, 00:10:13.070 { 00:10:13.070 "name": "pt3", 00:10:13.070 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.070 "is_configured": true, 00:10:13.070 "data_offset": 2048, 00:10:13.070 "data_size": 63488 00:10:13.070 } 00:10:13.070 ] 00:10:13.070 }' 00:10:13.329 12:36:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.329 12:36:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.587 [2024-12-14 12:36:13.177988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.587 [2024-12-14 12:36:13.178023] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.587 [2024-12-14 12:36:13.178198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.587 [2024-12-14 12:36:13.178303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.587 [2024-12-14 12:36:13.178343] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.587 [2024-12-14 12:36:13.245868] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:13.587 [2024-12-14 12:36:13.245922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.587 [2024-12-14 12:36:13.245957] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:13.587 [2024-12-14 12:36:13.245965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.587 [2024-12-14 12:36:13.248137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.587 [2024-12-14 12:36:13.248174] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:13.587 [2024-12-14 12:36:13.248255] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:13.587 [2024-12-14 12:36:13.248314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:13.587 [2024-12-14 12:36:13.248433] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:13.587 [2024-12-14 12:36:13.248443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.587 [2024-12-14 12:36:13.248458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:13.587 [2024-12-14 12:36:13.248508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:13.587 pt1 00:10:13.587 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.588 "name": "raid_bdev1", 00:10:13.588 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:13.588 "strip_size_kb": 0, 00:10:13.588 "state": "configuring", 00:10:13.588 "raid_level": "raid1", 00:10:13.588 "superblock": true, 00:10:13.588 "num_base_bdevs": 3, 00:10:13.588 "num_base_bdevs_discovered": 1, 00:10:13.588 "num_base_bdevs_operational": 2, 00:10:13.588 "base_bdevs_list": [ 00:10:13.588 { 00:10:13.588 "name": null, 00:10:13.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.588 "is_configured": false, 00:10:13.588 "data_offset": 2048, 00:10:13.588 "data_size": 63488 00:10:13.588 }, 00:10:13.588 { 00:10:13.588 "name": "pt2", 00:10:13.588 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:13.588 "is_configured": true, 00:10:13.588 "data_offset": 2048, 00:10:13.588 "data_size": 63488 00:10:13.588 }, 00:10:13.588 { 00:10:13.588 "name": null, 00:10:13.588 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:13.588 "is_configured": false, 00:10:13.588 "data_offset": 2048, 00:10:13.588 "data_size": 63488 00:10:13.588 } 00:10:13.588 ] 00:10:13.588 }' 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.588 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.154 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.154 [2024-12-14 12:36:13.741033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:14.154 [2024-12-14 12:36:13.741156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.154 [2024-12-14 12:36:13.741186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:14.154 [2024-12-14 12:36:13.741195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.154 [2024-12-14 12:36:13.741716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.154 [2024-12-14 12:36:13.741735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:14.154 [2024-12-14 12:36:13.741828] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:14.154 [2024-12-14 12:36:13.741852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:14.154 [2024-12-14 12:36:13.741990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:14.154 [2024-12-14 12:36:13.742000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:14.154 [2024-12-14 12:36:13.742290] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:14.154 [2024-12-14 12:36:13.742478] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:14.155 [2024-12-14 12:36:13.742501] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:14.155 [2024-12-14 12:36:13.742669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.155 pt3 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.155 "name": "raid_bdev1", 00:10:14.155 "uuid": "52bc476a-eda0-4a45-9aa6-bd4cc278ea99", 00:10:14.155 "strip_size_kb": 0, 00:10:14.155 "state": "online", 00:10:14.155 "raid_level": "raid1", 00:10:14.155 "superblock": true, 00:10:14.155 "num_base_bdevs": 3, 00:10:14.155 "num_base_bdevs_discovered": 2, 00:10:14.155 "num_base_bdevs_operational": 2, 00:10:14.155 "base_bdevs_list": [ 00:10:14.155 { 00:10:14.155 "name": null, 00:10:14.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.155 "is_configured": false, 00:10:14.155 "data_offset": 2048, 00:10:14.155 "data_size": 63488 00:10:14.155 }, 00:10:14.155 { 00:10:14.155 "name": "pt2", 00:10:14.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:14.155 "is_configured": true, 00:10:14.155 "data_offset": 2048, 00:10:14.155 "data_size": 63488 00:10:14.155 }, 00:10:14.155 { 00:10:14.155 "name": "pt3", 00:10:14.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:14.155 "is_configured": true, 00:10:14.155 "data_offset": 2048, 00:10:14.155 "data_size": 63488 00:10:14.155 } 00:10:14.155 ] 00:10:14.155 }' 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.155 12:36:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.721 [2024-12-14 12:36:14.248467] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 52bc476a-eda0-4a45-9aa6-bd4cc278ea99 '!=' 52bc476a-eda0-4a45-9aa6-bd4cc278ea99 ']' 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70434 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70434 ']' 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70434 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70434 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70434' 00:10:14.721 killing process with pid 70434 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 70434 00:10:14.721 [2024-12-14 12:36:14.322494] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.721 [2024-12-14 12:36:14.322650] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.721 12:36:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70434 00:10:14.721 [2024-12-14 12:36:14.322750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.721 [2024-12-14 12:36:14.322766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:14.979 [2024-12-14 12:36:14.635001] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:16.357 ************************************ 00:10:16.357 END TEST raid_superblock_test 00:10:16.357 ************************************ 00:10:16.357 12:36:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:16.357 00:10:16.357 real 0m7.751s 00:10:16.357 user 0m12.161s 00:10:16.357 sys 0m1.336s 00:10:16.357 12:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.357 12:36:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.357 12:36:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:16.357 12:36:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:16.357 12:36:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.357 12:36:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:16.357 ************************************ 00:10:16.357 START TEST raid_read_error_test 00:10:16.357 ************************************ 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:16.357 12:36:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:16.357 12:36:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.diCu2RNe5R 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70881 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70881 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70881 ']' 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:16.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:16.357 12:36:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.357 [2024-12-14 12:36:15.945796] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:16.357 [2024-12-14 12:36:15.945918] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70881 ] 00:10:16.616 [2024-12-14 12:36:16.119254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.616 [2024-12-14 12:36:16.231554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.874 [2024-12-14 12:36:16.428101] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.874 [2024-12-14 12:36:16.428137] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.132 BaseBdev1_malloc 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.132 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.132 true 00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.133 [2024-12-14 12:36:16.844880] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:17.133 [2024-12-14 12:36:16.844937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.133 [2024-12-14 12:36:16.844955] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:17.133 [2024-12-14 12:36:16.844965] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.133 [2024-12-14 12:36:16.847049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.133 [2024-12-14 12:36:16.847088] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:17.133 BaseBdev1 00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.133 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.391 BaseBdev2_malloc 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.391 true 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.391 [2024-12-14 12:36:16.911329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:17.391 [2024-12-14 12:36:16.911383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.391 [2024-12-14 12:36:16.911398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:17.391 [2024-12-14 12:36:16.911409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.391 [2024-12-14 12:36:16.913487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.391 [2024-12-14 12:36:16.913526] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:17.391 BaseBdev2 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.391 BaseBdev3_malloc 00:10:17.391 12:36:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.391 true 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.391 [2024-12-14 12:36:16.989683] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:17.391 [2024-12-14 12:36:16.989806] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:17.391 [2024-12-14 12:36:16.989831] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:17.391 [2024-12-14 12:36:16.989843] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:17.391 [2024-12-14 12:36:16.992034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:17.391 [2024-12-14 12:36:16.992077] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:17.391 BaseBdev3 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.391 12:36:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.391 [2024-12-14 12:36:17.001772] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:17.391 [2024-12-14 12:36:17.003707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.391 [2024-12-14 12:36:17.003842] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:17.391 [2024-12-14 12:36:17.004101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:17.391 [2024-12-14 12:36:17.004116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:17.391 [2024-12-14 12:36:17.004389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:17.392 [2024-12-14 12:36:17.004564] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:17.392 [2024-12-14 12:36:17.004576] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:17.392 [2024-12-14 12:36:17.004737] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:17.392 12:36:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.392 "name": "raid_bdev1", 00:10:17.392 "uuid": "e24ab907-3b31-4850-9ac7-e5774b28d5c0", 00:10:17.392 "strip_size_kb": 0, 00:10:17.392 "state": "online", 00:10:17.392 "raid_level": "raid1", 00:10:17.392 "superblock": true, 00:10:17.392 "num_base_bdevs": 3, 00:10:17.392 "num_base_bdevs_discovered": 3, 00:10:17.392 "num_base_bdevs_operational": 3, 00:10:17.392 "base_bdevs_list": [ 00:10:17.392 { 00:10:17.392 "name": "BaseBdev1", 00:10:17.392 "uuid": "b8cbd543-0175-567a-88cf-7beb28471869", 00:10:17.392 "is_configured": true, 00:10:17.392 "data_offset": 2048, 00:10:17.392 "data_size": 63488 00:10:17.392 }, 00:10:17.392 { 00:10:17.392 "name": "BaseBdev2", 00:10:17.392 "uuid": "1ef8089a-11b2-5e78-aa53-84b71295c11b", 00:10:17.392 "is_configured": true, 00:10:17.392 "data_offset": 2048, 00:10:17.392 "data_size": 63488 
00:10:17.392 }, 00:10:17.392 { 00:10:17.392 "name": "BaseBdev3", 00:10:17.392 "uuid": "1e62e5c5-c4af-560a-9e5e-5c077345394a", 00:10:17.392 "is_configured": true, 00:10:17.392 "data_offset": 2048, 00:10:17.392 "data_size": 63488 00:10:17.392 } 00:10:17.392 ] 00:10:17.392 }' 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.392 12:36:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.958 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:17.958 12:36:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.958 [2024-12-14 12:36:17.570103] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.894 
12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.894 "name": "raid_bdev1", 00:10:18.894 "uuid": "e24ab907-3b31-4850-9ac7-e5774b28d5c0", 00:10:18.894 "strip_size_kb": 0, 00:10:18.894 "state": "online", 00:10:18.894 "raid_level": "raid1", 00:10:18.894 "superblock": true, 00:10:18.894 "num_base_bdevs": 3, 00:10:18.894 "num_base_bdevs_discovered": 3, 00:10:18.894 "num_base_bdevs_operational": 3, 00:10:18.894 "base_bdevs_list": [ 00:10:18.894 { 00:10:18.894 "name": "BaseBdev1", 00:10:18.894 "uuid": "b8cbd543-0175-567a-88cf-7beb28471869", 
00:10:18.894 "is_configured": true, 00:10:18.894 "data_offset": 2048, 00:10:18.894 "data_size": 63488 00:10:18.894 }, 00:10:18.894 { 00:10:18.894 "name": "BaseBdev2", 00:10:18.894 "uuid": "1ef8089a-11b2-5e78-aa53-84b71295c11b", 00:10:18.894 "is_configured": true, 00:10:18.894 "data_offset": 2048, 00:10:18.894 "data_size": 63488 00:10:18.894 }, 00:10:18.894 { 00:10:18.894 "name": "BaseBdev3", 00:10:18.894 "uuid": "1e62e5c5-c4af-560a-9e5e-5c077345394a", 00:10:18.894 "is_configured": true, 00:10:18.894 "data_offset": 2048, 00:10:18.894 "data_size": 63488 00:10:18.894 } 00:10:18.894 ] 00:10:18.894 }' 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.894 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.460 [2024-12-14 12:36:18.904818] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.460 [2024-12-14 12:36:18.904948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.460 [2024-12-14 12:36:18.908241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.460 [2024-12-14 12:36:18.908339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.460 [2024-12-14 12:36:18.908499] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.460 [2024-12-14 12:36:18.908554] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:19.460 { 00:10:19.460 "results": [ 00:10:19.460 { 00:10:19.460 "job": "raid_bdev1", 
00:10:19.460 "core_mask": "0x1", 00:10:19.460 "workload": "randrw", 00:10:19.460 "percentage": 50, 00:10:19.460 "status": "finished", 00:10:19.460 "queue_depth": 1, 00:10:19.460 "io_size": 131072, 00:10:19.460 "runtime": 1.335623, 00:10:19.460 "iops": 13086.776732655846, 00:10:19.460 "mibps": 1635.8470915819807, 00:10:19.460 "io_failed": 0, 00:10:19.460 "io_timeout": 0, 00:10:19.460 "avg_latency_us": 73.66478761413259, 00:10:19.460 "min_latency_us": 24.482096069868994, 00:10:19.460 "max_latency_us": 1688.482096069869 00:10:19.460 } 00:10:19.460 ], 00:10:19.460 "core_count": 1 00:10:19.460 } 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70881 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70881 ']' 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70881 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70881 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.460 killing process with pid 70881 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70881' 00:10:19.460 12:36:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70881 00:10:19.460 [2024-12-14 12:36:18.952910] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.460 12:36:18 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70881 00:10:19.460 [2024-12-14 12:36:19.186531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.diCu2RNe5R 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:20.835 00:10:20.835 real 0m4.551s 00:10:20.835 user 0m5.422s 00:10:20.835 sys 0m0.566s 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.835 ************************************ 00:10:20.835 END TEST raid_read_error_test 00:10:20.835 ************************************ 00:10:20.835 12:36:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.835 12:36:20 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:20.835 12:36:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:20.835 12:36:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.835 12:36:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.835 ************************************ 00:10:20.835 START TEST raid_write_error_test 00:10:20.835 ************************************ 00:10:20.835 12:36:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8YDA4Y72d2 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71031 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71031 00:10:20.835 12:36:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71031 ']' 00:10:20.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.836 12:36:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.836 12:36:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.836 12:36:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:20.836 12:36:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.836 12:36:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.836 [2024-12-14 12:36:20.570657] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:10:20.836 [2024-12-14 12:36:20.570782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71031 ] 00:10:21.094 [2024-12-14 12:36:20.745516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.352 [2024-12-14 12:36:20.867335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.352 [2024-12-14 12:36:21.066231] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.352 [2024-12-14 12:36:21.066322] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 BaseBdev1_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 true 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 [2024-12-14 12:36:21.463821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:21.919 [2024-12-14 12:36:21.463872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.919 [2024-12-14 12:36:21.463907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:21.919 [2024-12-14 12:36:21.463917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.919 [2024-12-14 12:36:21.465976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.919 [2024-12-14 12:36:21.466015] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:21.919 BaseBdev1 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.919 BaseBdev2_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 true 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 [2024-12-14 12:36:21.526962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:21.919 [2024-12-14 12:36:21.527013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.919 [2024-12-14 12:36:21.527030] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:21.919 [2024-12-14 12:36:21.527052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.919 [2024-12-14 12:36:21.529118] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.919 [2024-12-14 12:36:21.529204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:21.919 BaseBdev2 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:21.919 12:36:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 BaseBdev3_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 true 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 [2024-12-14 12:36:21.603812] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:21.919 [2024-12-14 12:36:21.603869] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.919 [2024-12-14 12:36:21.603888] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:21.919 [2024-12-14 12:36:21.603899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.919 [2024-12-14 12:36:21.606200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.919 [2024-12-14 12:36:21.606237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:21.919 BaseBdev3 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.919 [2024-12-14 12:36:21.615872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.919 [2024-12-14 12:36:21.617858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.919 [2024-12-14 12:36:21.617983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.919 [2024-12-14 12:36:21.618251] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:21.919 [2024-12-14 12:36:21.618268] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.919 [2024-12-14 12:36:21.618560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:21.919 [2024-12-14 12:36:21.618750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:21.919 [2024-12-14 12:36:21.618762] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:21.919 [2024-12-14 12:36:21.618942] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.919 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.920 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.920 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.200 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.200 "name": "raid_bdev1", 00:10:22.200 "uuid": "fc1cba96-196d-4102-9c01-edc538a00c08", 00:10:22.200 "strip_size_kb": 0, 00:10:22.200 "state": "online", 00:10:22.200 "raid_level": "raid1", 00:10:22.200 "superblock": true, 00:10:22.200 "num_base_bdevs": 3, 00:10:22.200 "num_base_bdevs_discovered": 3, 00:10:22.200 "num_base_bdevs_operational": 3, 00:10:22.200 "base_bdevs_list": [ 00:10:22.200 { 00:10:22.200 "name": "BaseBdev1", 00:10:22.200 
"uuid": "ebd077ee-0176-5e16-b9d5-e02f9cea368c", 00:10:22.200 "is_configured": true, 00:10:22.200 "data_offset": 2048, 00:10:22.200 "data_size": 63488 00:10:22.200 }, 00:10:22.200 { 00:10:22.200 "name": "BaseBdev2", 00:10:22.200 "uuid": "386ebb90-8918-5e22-b377-9dfd4cae707e", 00:10:22.200 "is_configured": true, 00:10:22.200 "data_offset": 2048, 00:10:22.200 "data_size": 63488 00:10:22.200 }, 00:10:22.200 { 00:10:22.200 "name": "BaseBdev3", 00:10:22.200 "uuid": "56eb1b49-4d9c-559f-8773-c18beab34cbd", 00:10:22.200 "is_configured": true, 00:10:22.200 "data_offset": 2048, 00:10:22.200 "data_size": 63488 00:10:22.200 } 00:10:22.200 ] 00:10:22.200 }' 00:10:22.200 12:36:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.200 12:36:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.458 12:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:22.458 12:36:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:22.458 [2024-12-14 12:36:22.172548] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.396 [2024-12-14 12:36:23.087590] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:23.396 [2024-12-14 12:36:23.087742] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.396 [2024-12-14 12:36:23.088011] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.396 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.656 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.656 "name": "raid_bdev1", 00:10:23.656 "uuid": "fc1cba96-196d-4102-9c01-edc538a00c08", 00:10:23.656 "strip_size_kb": 0, 00:10:23.656 "state": "online", 00:10:23.656 "raid_level": "raid1", 00:10:23.656 "superblock": true, 00:10:23.656 "num_base_bdevs": 3, 00:10:23.656 "num_base_bdevs_discovered": 2, 00:10:23.656 "num_base_bdevs_operational": 2, 00:10:23.656 "base_bdevs_list": [ 00:10:23.656 { 00:10:23.656 "name": null, 00:10:23.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.656 "is_configured": false, 00:10:23.656 "data_offset": 0, 00:10:23.656 "data_size": 63488 00:10:23.656 }, 00:10:23.656 { 00:10:23.656 "name": "BaseBdev2", 00:10:23.656 "uuid": "386ebb90-8918-5e22-b377-9dfd4cae707e", 00:10:23.656 "is_configured": true, 00:10:23.656 "data_offset": 2048, 00:10:23.656 "data_size": 63488 00:10:23.656 }, 00:10:23.656 { 00:10:23.656 "name": "BaseBdev3", 00:10:23.656 "uuid": "56eb1b49-4d9c-559f-8773-c18beab34cbd", 00:10:23.656 "is_configured": true, 00:10:23.656 "data_offset": 2048, 00:10:23.656 "data_size": 63488 00:10:23.656 } 00:10:23.656 ] 00:10:23.656 }' 00:10:23.656 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.656 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.914 [2024-12-14 12:36:23.549991] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.914 [2024-12-14 12:36:23.550027] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.914 [2024-12-14 12:36:23.552951] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.914 [2024-12-14 12:36:23.553058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.914 [2024-12-14 12:36:23.553160] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.914 [2024-12-14 12:36:23.553235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:23.914 { 00:10:23.914 "results": [ 00:10:23.914 { 00:10:23.914 "job": "raid_bdev1", 00:10:23.914 "core_mask": "0x1", 00:10:23.914 "workload": "randrw", 00:10:23.914 "percentage": 50, 00:10:23.914 "status": "finished", 00:10:23.914 "queue_depth": 1, 00:10:23.914 "io_size": 131072, 00:10:23.914 "runtime": 1.378098, 00:10:23.914 "iops": 14403.184679173759, 00:10:23.914 "mibps": 1800.3980848967199, 00:10:23.914 "io_failed": 0, 00:10:23.914 "io_timeout": 0, 00:10:23.914 "avg_latency_us": 66.61279982646272, 00:10:23.914 "min_latency_us": 24.258515283842794, 00:10:23.914 "max_latency_us": 1359.3711790393013 00:10:23.914 } 00:10:23.914 ], 00:10:23.914 "core_count": 1 00:10:23.914 } 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71031 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71031 ']' 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71031 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:23.914 12:36:23 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71031 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.914 killing process with pid 71031 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71031' 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71031 00:10:23.914 [2024-12-14 12:36:23.584760] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.914 12:36:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71031 00:10:24.173 [2024-12-14 12:36:23.818742] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8YDA4Y72d2 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:25.558 ************************************ 00:10:25.558 END TEST raid_write_error_test 00:10:25.558 
************************************ 00:10:25.558 00:10:25.558 real 0m4.556s 00:10:25.558 user 0m5.430s 00:10:25.558 sys 0m0.548s 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.558 12:36:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.558 12:36:25 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:25.558 12:36:25 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:25.558 12:36:25 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:25.558 12:36:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:25.558 12:36:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.558 12:36:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:25.558 ************************************ 00:10:25.558 START TEST raid_state_function_test 00:10:25.558 ************************************ 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71169 00:10:25.558 Process raid pid: 71169 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71169' 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71169 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71169 ']' 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.558 12:36:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.558 [2024-12-14 12:36:25.191099] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:25.558 [2024-12-14 12:36:25.191331] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.819 [2024-12-14 12:36:25.368002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.819 [2024-12-14 12:36:25.488985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.078 [2024-12-14 12:36:25.693719] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.078 [2024-12-14 12:36:25.693780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.338 [2024-12-14 12:36:26.036476] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.338 [2024-12-14 12:36:26.036534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.338 [2024-12-14 12:36:26.036544] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.338 [2024-12-14 12:36:26.036570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.338 [2024-12-14 12:36:26.036576] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:26.338 [2024-12-14 12:36:26.036585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.338 [2024-12-14 12:36:26.036592] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.338 [2024-12-14 12:36:26.036600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.338 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.598 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.598 "name": "Existed_Raid", 00:10:26.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.598 "strip_size_kb": 64, 00:10:26.598 "state": "configuring", 00:10:26.598 "raid_level": "raid0", 00:10:26.598 "superblock": false, 00:10:26.598 "num_base_bdevs": 4, 00:10:26.598 "num_base_bdevs_discovered": 0, 00:10:26.598 "num_base_bdevs_operational": 4, 00:10:26.598 "base_bdevs_list": [ 00:10:26.598 { 00:10:26.598 "name": "BaseBdev1", 00:10:26.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.598 "is_configured": false, 00:10:26.598 "data_offset": 0, 00:10:26.598 "data_size": 0 00:10:26.598 }, 00:10:26.598 { 00:10:26.598 "name": "BaseBdev2", 00:10:26.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.598 "is_configured": false, 00:10:26.598 "data_offset": 0, 00:10:26.598 "data_size": 0 00:10:26.598 }, 00:10:26.598 { 00:10:26.598 "name": "BaseBdev3", 00:10:26.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.598 "is_configured": false, 00:10:26.598 "data_offset": 0, 00:10:26.598 "data_size": 0 00:10:26.598 }, 00:10:26.598 { 00:10:26.598 "name": "BaseBdev4", 00:10:26.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.598 "is_configured": false, 00:10:26.598 "data_offset": 0, 00:10:26.598 "data_size": 0 00:10:26.598 } 00:10:26.598 ] 00:10:26.598 }' 00:10:26.598 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.598 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.859 [2024-12-14 12:36:26.403788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.859 [2024-12-14 12:36:26.403886] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.859 [2024-12-14 12:36:26.415756] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.859 [2024-12-14 12:36:26.415857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.859 [2024-12-14 12:36:26.415884] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.859 [2024-12-14 12:36:26.415908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.859 [2024-12-14 12:36:26.415926] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.859 [2024-12-14 12:36:26.415946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.859 [2024-12-14 12:36:26.415964] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:26.859 [2024-12-14 12:36:26.415985] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.859 [2024-12-14 12:36:26.461229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.859 BaseBdev1 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.859 [ 00:10:26.859 { 00:10:26.859 "name": "BaseBdev1", 00:10:26.859 "aliases": [ 00:10:26.859 "91b41cc2-a3c0-4560-8cd0-f446a31d667a" 00:10:26.859 ], 00:10:26.859 "product_name": "Malloc disk", 00:10:26.859 "block_size": 512, 00:10:26.859 "num_blocks": 65536, 00:10:26.859 "uuid": "91b41cc2-a3c0-4560-8cd0-f446a31d667a", 00:10:26.859 "assigned_rate_limits": { 00:10:26.859 "rw_ios_per_sec": 0, 00:10:26.859 "rw_mbytes_per_sec": 0, 00:10:26.859 "r_mbytes_per_sec": 0, 00:10:26.859 "w_mbytes_per_sec": 0 00:10:26.859 }, 00:10:26.859 "claimed": true, 00:10:26.859 "claim_type": "exclusive_write", 00:10:26.859 "zoned": false, 00:10:26.859 "supported_io_types": { 00:10:26.859 "read": true, 00:10:26.859 "write": true, 00:10:26.859 "unmap": true, 00:10:26.859 "flush": true, 00:10:26.859 "reset": true, 00:10:26.859 "nvme_admin": false, 00:10:26.859 "nvme_io": false, 00:10:26.859 "nvme_io_md": false, 00:10:26.859 "write_zeroes": true, 00:10:26.859 "zcopy": true, 00:10:26.859 "get_zone_info": false, 00:10:26.859 "zone_management": false, 00:10:26.859 "zone_append": false, 00:10:26.859 "compare": false, 00:10:26.859 "compare_and_write": false, 00:10:26.859 "abort": true, 00:10:26.859 "seek_hole": false, 00:10:26.859 "seek_data": false, 00:10:26.859 "copy": true, 00:10:26.859 "nvme_iov_md": false 00:10:26.859 }, 00:10:26.859 "memory_domains": [ 00:10:26.859 { 00:10:26.859 "dma_device_id": "system", 00:10:26.859 "dma_device_type": 1 00:10:26.859 }, 00:10:26.859 { 00:10:26.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.859 "dma_device_type": 2 00:10:26.859 } 00:10:26.859 ], 00:10:26.859 "driver_specific": {} 00:10:26.859 } 00:10:26.859 ] 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.859 "name": "Existed_Raid", 
00:10:26.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.859 "strip_size_kb": 64, 00:10:26.859 "state": "configuring", 00:10:26.859 "raid_level": "raid0", 00:10:26.859 "superblock": false, 00:10:26.859 "num_base_bdevs": 4, 00:10:26.859 "num_base_bdevs_discovered": 1, 00:10:26.859 "num_base_bdevs_operational": 4, 00:10:26.859 "base_bdevs_list": [ 00:10:26.859 { 00:10:26.859 "name": "BaseBdev1", 00:10:26.859 "uuid": "91b41cc2-a3c0-4560-8cd0-f446a31d667a", 00:10:26.859 "is_configured": true, 00:10:26.859 "data_offset": 0, 00:10:26.859 "data_size": 65536 00:10:26.859 }, 00:10:26.859 { 00:10:26.859 "name": "BaseBdev2", 00:10:26.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.859 "is_configured": false, 00:10:26.859 "data_offset": 0, 00:10:26.859 "data_size": 0 00:10:26.859 }, 00:10:26.859 { 00:10:26.859 "name": "BaseBdev3", 00:10:26.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.859 "is_configured": false, 00:10:26.859 "data_offset": 0, 00:10:26.859 "data_size": 0 00:10:26.859 }, 00:10:26.859 { 00:10:26.859 "name": "BaseBdev4", 00:10:26.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.859 "is_configured": false, 00:10:26.859 "data_offset": 0, 00:10:26.859 "data_size": 0 00:10:26.859 } 00:10:26.859 ] 00:10:26.859 }' 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.859 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.428 [2024-12-14 12:36:26.932471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:27.428 [2024-12-14 12:36:26.932531] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.428 [2024-12-14 12:36:26.944491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:27.428 [2024-12-14 12:36:26.946361] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:27.428 [2024-12-14 12:36:26.946455] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:27.428 [2024-12-14 12:36:26.946485] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:27.428 [2024-12-14 12:36:26.946510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:27.428 [2024-12-14 12:36:26.946529] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:27.428 [2024-12-14 12:36:26.946551] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.428 "name": "Existed_Raid", 00:10:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.428 "strip_size_kb": 64, 00:10:27.428 "state": "configuring", 00:10:27.428 "raid_level": "raid0", 00:10:27.428 "superblock": false, 00:10:27.428 "num_base_bdevs": 4, 00:10:27.428 
"num_base_bdevs_discovered": 1, 00:10:27.428 "num_base_bdevs_operational": 4, 00:10:27.428 "base_bdevs_list": [ 00:10:27.428 { 00:10:27.428 "name": "BaseBdev1", 00:10:27.428 "uuid": "91b41cc2-a3c0-4560-8cd0-f446a31d667a", 00:10:27.428 "is_configured": true, 00:10:27.428 "data_offset": 0, 00:10:27.428 "data_size": 65536 00:10:27.428 }, 00:10:27.428 { 00:10:27.428 "name": "BaseBdev2", 00:10:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.428 "is_configured": false, 00:10:27.428 "data_offset": 0, 00:10:27.428 "data_size": 0 00:10:27.428 }, 00:10:27.428 { 00:10:27.428 "name": "BaseBdev3", 00:10:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.428 "is_configured": false, 00:10:27.428 "data_offset": 0, 00:10:27.428 "data_size": 0 00:10:27.428 }, 00:10:27.428 { 00:10:27.428 "name": "BaseBdev4", 00:10:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.428 "is_configured": false, 00:10:27.428 "data_offset": 0, 00:10:27.428 "data_size": 0 00:10:27.428 } 00:10:27.428 ] 00:10:27.428 }' 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.428 12:36:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.687 [2024-12-14 12:36:27.405555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.687 BaseBdev2 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:27.687 12:36:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.687 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.948 [ 00:10:27.948 { 00:10:27.948 "name": "BaseBdev2", 00:10:27.948 "aliases": [ 00:10:27.948 "4c9bf872-6c23-40fb-a003-8a64e078dd26" 00:10:27.948 ], 00:10:27.948 "product_name": "Malloc disk", 00:10:27.948 "block_size": 512, 00:10:27.948 "num_blocks": 65536, 00:10:27.948 "uuid": "4c9bf872-6c23-40fb-a003-8a64e078dd26", 00:10:27.948 "assigned_rate_limits": { 00:10:27.948 "rw_ios_per_sec": 0, 00:10:27.948 "rw_mbytes_per_sec": 0, 00:10:27.948 "r_mbytes_per_sec": 0, 00:10:27.948 "w_mbytes_per_sec": 0 00:10:27.948 }, 00:10:27.948 "claimed": true, 00:10:27.948 "claim_type": "exclusive_write", 00:10:27.948 "zoned": false, 00:10:27.948 "supported_io_types": { 
00:10:27.948 "read": true, 00:10:27.948 "write": true, 00:10:27.948 "unmap": true, 00:10:27.948 "flush": true, 00:10:27.948 "reset": true, 00:10:27.948 "nvme_admin": false, 00:10:27.948 "nvme_io": false, 00:10:27.948 "nvme_io_md": false, 00:10:27.948 "write_zeroes": true, 00:10:27.948 "zcopy": true, 00:10:27.948 "get_zone_info": false, 00:10:27.948 "zone_management": false, 00:10:27.948 "zone_append": false, 00:10:27.948 "compare": false, 00:10:27.948 "compare_and_write": false, 00:10:27.948 "abort": true, 00:10:27.948 "seek_hole": false, 00:10:27.948 "seek_data": false, 00:10:27.948 "copy": true, 00:10:27.948 "nvme_iov_md": false 00:10:27.948 }, 00:10:27.948 "memory_domains": [ 00:10:27.948 { 00:10:27.948 "dma_device_id": "system", 00:10:27.948 "dma_device_type": 1 00:10:27.948 }, 00:10:27.948 { 00:10:27.948 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.948 "dma_device_type": 2 00:10:27.948 } 00:10:27.948 ], 00:10:27.948 "driver_specific": {} 00:10:27.948 } 00:10:27.948 ] 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.948 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.948 "name": "Existed_Raid", 00:10:27.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.948 "strip_size_kb": 64, 00:10:27.948 "state": "configuring", 00:10:27.948 "raid_level": "raid0", 00:10:27.948 "superblock": false, 00:10:27.948 "num_base_bdevs": 4, 00:10:27.949 "num_base_bdevs_discovered": 2, 00:10:27.949 "num_base_bdevs_operational": 4, 00:10:27.949 "base_bdevs_list": [ 00:10:27.949 { 00:10:27.949 "name": "BaseBdev1", 00:10:27.949 "uuid": "91b41cc2-a3c0-4560-8cd0-f446a31d667a", 00:10:27.949 "is_configured": true, 00:10:27.949 "data_offset": 0, 00:10:27.949 "data_size": 65536 00:10:27.949 }, 00:10:27.949 { 00:10:27.949 "name": "BaseBdev2", 00:10:27.949 "uuid": "4c9bf872-6c23-40fb-a003-8a64e078dd26", 00:10:27.949 
"is_configured": true, 00:10:27.949 "data_offset": 0, 00:10:27.949 "data_size": 65536 00:10:27.949 }, 00:10:27.949 { 00:10:27.949 "name": "BaseBdev3", 00:10:27.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.949 "is_configured": false, 00:10:27.949 "data_offset": 0, 00:10:27.949 "data_size": 0 00:10:27.949 }, 00:10:27.949 { 00:10:27.949 "name": "BaseBdev4", 00:10:27.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.949 "is_configured": false, 00:10:27.949 "data_offset": 0, 00:10:27.949 "data_size": 0 00:10:27.949 } 00:10:27.949 ] 00:10:27.949 }' 00:10:27.949 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.949 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.209 [2024-12-14 12:36:27.893448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.209 BaseBdev3 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.209 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.210 [ 00:10:28.210 { 00:10:28.210 "name": "BaseBdev3", 00:10:28.210 "aliases": [ 00:10:28.210 "8a2decba-4346-4f71-b694-514de6bbe252" 00:10:28.210 ], 00:10:28.210 "product_name": "Malloc disk", 00:10:28.210 "block_size": 512, 00:10:28.210 "num_blocks": 65536, 00:10:28.210 "uuid": "8a2decba-4346-4f71-b694-514de6bbe252", 00:10:28.210 "assigned_rate_limits": { 00:10:28.210 "rw_ios_per_sec": 0, 00:10:28.210 "rw_mbytes_per_sec": 0, 00:10:28.210 "r_mbytes_per_sec": 0, 00:10:28.210 "w_mbytes_per_sec": 0 00:10:28.210 }, 00:10:28.210 "claimed": true, 00:10:28.210 "claim_type": "exclusive_write", 00:10:28.210 "zoned": false, 00:10:28.210 "supported_io_types": { 00:10:28.210 "read": true, 00:10:28.210 "write": true, 00:10:28.210 "unmap": true, 00:10:28.210 "flush": true, 00:10:28.210 "reset": true, 00:10:28.210 "nvme_admin": false, 00:10:28.210 "nvme_io": false, 00:10:28.210 "nvme_io_md": false, 00:10:28.210 "write_zeroes": true, 00:10:28.210 "zcopy": true, 00:10:28.210 "get_zone_info": false, 00:10:28.210 "zone_management": false, 00:10:28.210 "zone_append": false, 00:10:28.210 "compare": false, 00:10:28.210 "compare_and_write": false, 
00:10:28.210 "abort": true, 00:10:28.210 "seek_hole": false, 00:10:28.210 "seek_data": false, 00:10:28.210 "copy": true, 00:10:28.210 "nvme_iov_md": false 00:10:28.210 }, 00:10:28.210 "memory_domains": [ 00:10:28.210 { 00:10:28.210 "dma_device_id": "system", 00:10:28.210 "dma_device_type": 1 00:10:28.210 }, 00:10:28.210 { 00:10:28.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.210 "dma_device_type": 2 00:10:28.210 } 00:10:28.210 ], 00:10:28.210 "driver_specific": {} 00:10:28.210 } 00:10:28.210 ] 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.210 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.470 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.470 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.470 "name": "Existed_Raid", 00:10:28.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.470 "strip_size_kb": 64, 00:10:28.470 "state": "configuring", 00:10:28.470 "raid_level": "raid0", 00:10:28.470 "superblock": false, 00:10:28.470 "num_base_bdevs": 4, 00:10:28.470 "num_base_bdevs_discovered": 3, 00:10:28.470 "num_base_bdevs_operational": 4, 00:10:28.470 "base_bdevs_list": [ 00:10:28.470 { 00:10:28.470 "name": "BaseBdev1", 00:10:28.470 "uuid": "91b41cc2-a3c0-4560-8cd0-f446a31d667a", 00:10:28.470 "is_configured": true, 00:10:28.470 "data_offset": 0, 00:10:28.470 "data_size": 65536 00:10:28.470 }, 00:10:28.470 { 00:10:28.470 "name": "BaseBdev2", 00:10:28.470 "uuid": "4c9bf872-6c23-40fb-a003-8a64e078dd26", 00:10:28.470 "is_configured": true, 00:10:28.470 "data_offset": 0, 00:10:28.470 "data_size": 65536 00:10:28.470 }, 00:10:28.470 { 00:10:28.470 "name": "BaseBdev3", 00:10:28.470 "uuid": "8a2decba-4346-4f71-b694-514de6bbe252", 00:10:28.470 "is_configured": true, 00:10:28.470 "data_offset": 0, 00:10:28.470 "data_size": 65536 00:10:28.470 }, 00:10:28.470 { 00:10:28.470 "name": "BaseBdev4", 00:10:28.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.470 "is_configured": false, 
00:10:28.470 "data_offset": 0, 00:10:28.470 "data_size": 0 00:10:28.470 } 00:10:28.470 ] 00:10:28.470 }' 00:10:28.470 12:36:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.470 12:36:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.730 [2024-12-14 12:36:28.417575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.730 [2024-12-14 12:36:28.417620] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:28.730 [2024-12-14 12:36:28.417630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:28.730 [2024-12-14 12:36:28.417917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:28.730 [2024-12-14 12:36:28.418099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:28.730 [2024-12-14 12:36:28.418115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:28.730 [2024-12-14 12:36:28.418375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.730 BaseBdev4 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.730 [ 00:10:28.730 { 00:10:28.730 "name": "BaseBdev4", 00:10:28.730 "aliases": [ 00:10:28.730 "a51245b0-a038-4aeb-b6af-d8eb0d5023df" 00:10:28.730 ], 00:10:28.730 "product_name": "Malloc disk", 00:10:28.730 "block_size": 512, 00:10:28.730 "num_blocks": 65536, 00:10:28.730 "uuid": "a51245b0-a038-4aeb-b6af-d8eb0d5023df", 00:10:28.730 "assigned_rate_limits": { 00:10:28.730 "rw_ios_per_sec": 0, 00:10:28.730 "rw_mbytes_per_sec": 0, 00:10:28.730 "r_mbytes_per_sec": 0, 00:10:28.730 "w_mbytes_per_sec": 0 00:10:28.730 }, 00:10:28.730 "claimed": true, 00:10:28.730 "claim_type": "exclusive_write", 00:10:28.730 "zoned": false, 00:10:28.730 "supported_io_types": { 00:10:28.730 "read": true, 00:10:28.730 "write": true, 00:10:28.730 "unmap": true, 00:10:28.730 "flush": true, 00:10:28.730 "reset": true, 00:10:28.730 
"nvme_admin": false, 00:10:28.730 "nvme_io": false, 00:10:28.730 "nvme_io_md": false, 00:10:28.730 "write_zeroes": true, 00:10:28.730 "zcopy": true, 00:10:28.730 "get_zone_info": false, 00:10:28.730 "zone_management": false, 00:10:28.730 "zone_append": false, 00:10:28.730 "compare": false, 00:10:28.730 "compare_and_write": false, 00:10:28.730 "abort": true, 00:10:28.730 "seek_hole": false, 00:10:28.730 "seek_data": false, 00:10:28.730 "copy": true, 00:10:28.730 "nvme_iov_md": false 00:10:28.730 }, 00:10:28.730 "memory_domains": [ 00:10:28.730 { 00:10:28.730 "dma_device_id": "system", 00:10:28.730 "dma_device_type": 1 00:10:28.730 }, 00:10:28.730 { 00:10:28.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.730 "dma_device_type": 2 00:10:28.730 } 00:10:28.730 ], 00:10:28.730 "driver_specific": {} 00:10:28.730 } 00:10:28.730 ] 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.730 12:36:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.730 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.990 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.990 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.990 "name": "Existed_Raid", 00:10:28.990 "uuid": "896a10cf-d356-428a-893e-f32a05a075d3", 00:10:28.990 "strip_size_kb": 64, 00:10:28.990 "state": "online", 00:10:28.990 "raid_level": "raid0", 00:10:28.990 "superblock": false, 00:10:28.990 "num_base_bdevs": 4, 00:10:28.990 "num_base_bdevs_discovered": 4, 00:10:28.990 "num_base_bdevs_operational": 4, 00:10:28.990 "base_bdevs_list": [ 00:10:28.990 { 00:10:28.990 "name": "BaseBdev1", 00:10:28.990 "uuid": "91b41cc2-a3c0-4560-8cd0-f446a31d667a", 00:10:28.990 "is_configured": true, 00:10:28.990 "data_offset": 0, 00:10:28.990 "data_size": 65536 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "name": "BaseBdev2", 00:10:28.990 "uuid": "4c9bf872-6c23-40fb-a003-8a64e078dd26", 00:10:28.990 "is_configured": true, 00:10:28.990 "data_offset": 0, 00:10:28.990 "data_size": 65536 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "name": "BaseBdev3", 00:10:28.990 "uuid": 
"8a2decba-4346-4f71-b694-514de6bbe252", 00:10:28.990 "is_configured": true, 00:10:28.990 "data_offset": 0, 00:10:28.990 "data_size": 65536 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "name": "BaseBdev4", 00:10:28.990 "uuid": "a51245b0-a038-4aeb-b6af-d8eb0d5023df", 00:10:28.990 "is_configured": true, 00:10:28.990 "data_offset": 0, 00:10:28.990 "data_size": 65536 00:10:28.990 } 00:10:28.990 ] 00:10:28.990 }' 00:10:28.990 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.990 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.250 [2024-12-14 12:36:28.873291] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:29.250 12:36:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.250 12:36:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:29.250 "name": "Existed_Raid", 00:10:29.250 "aliases": [ 00:10:29.250 "896a10cf-d356-428a-893e-f32a05a075d3" 00:10:29.250 ], 00:10:29.250 "product_name": "Raid Volume", 00:10:29.250 "block_size": 512, 00:10:29.250 "num_blocks": 262144, 00:10:29.250 "uuid": "896a10cf-d356-428a-893e-f32a05a075d3", 00:10:29.250 "assigned_rate_limits": { 00:10:29.250 "rw_ios_per_sec": 0, 00:10:29.250 "rw_mbytes_per_sec": 0, 00:10:29.250 "r_mbytes_per_sec": 0, 00:10:29.250 "w_mbytes_per_sec": 0 00:10:29.250 }, 00:10:29.250 "claimed": false, 00:10:29.250 "zoned": false, 00:10:29.250 "supported_io_types": { 00:10:29.250 "read": true, 00:10:29.250 "write": true, 00:10:29.250 "unmap": true, 00:10:29.250 "flush": true, 00:10:29.250 "reset": true, 00:10:29.250 "nvme_admin": false, 00:10:29.250 "nvme_io": false, 00:10:29.250 "nvme_io_md": false, 00:10:29.250 "write_zeroes": true, 00:10:29.250 "zcopy": false, 00:10:29.250 "get_zone_info": false, 00:10:29.250 "zone_management": false, 00:10:29.250 "zone_append": false, 00:10:29.250 "compare": false, 00:10:29.250 "compare_and_write": false, 00:10:29.250 "abort": false, 00:10:29.250 "seek_hole": false, 00:10:29.250 "seek_data": false, 00:10:29.250 "copy": false, 00:10:29.250 "nvme_iov_md": false 00:10:29.250 }, 00:10:29.250 "memory_domains": [ 00:10:29.250 { 00:10:29.250 "dma_device_id": "system", 00:10:29.250 "dma_device_type": 1 00:10:29.250 }, 00:10:29.250 { 00:10:29.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.250 "dma_device_type": 2 00:10:29.250 }, 00:10:29.250 { 00:10:29.250 "dma_device_id": "system", 00:10:29.250 "dma_device_type": 1 00:10:29.250 }, 00:10:29.250 { 00:10:29.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.250 "dma_device_type": 2 00:10:29.250 }, 00:10:29.250 { 00:10:29.250 "dma_device_id": "system", 00:10:29.250 "dma_device_type": 1 00:10:29.250 }, 00:10:29.250 { 00:10:29.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:29.250 "dma_device_type": 2 00:10:29.250 }, 00:10:29.250 { 00:10:29.250 "dma_device_id": "system", 00:10:29.250 "dma_device_type": 1 00:10:29.250 }, 00:10:29.250 { 00:10:29.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.250 "dma_device_type": 2 00:10:29.250 } 00:10:29.250 ], 00:10:29.250 "driver_specific": { 00:10:29.250 "raid": { 00:10:29.250 "uuid": "896a10cf-d356-428a-893e-f32a05a075d3", 00:10:29.250 "strip_size_kb": 64, 00:10:29.250 "state": "online", 00:10:29.250 "raid_level": "raid0", 00:10:29.250 "superblock": false, 00:10:29.250 "num_base_bdevs": 4, 00:10:29.250 "num_base_bdevs_discovered": 4, 00:10:29.250 "num_base_bdevs_operational": 4, 00:10:29.250 "base_bdevs_list": [ 00:10:29.250 { 00:10:29.250 "name": "BaseBdev1", 00:10:29.250 "uuid": "91b41cc2-a3c0-4560-8cd0-f446a31d667a", 00:10:29.251 "is_configured": true, 00:10:29.251 "data_offset": 0, 00:10:29.251 "data_size": 65536 00:10:29.251 }, 00:10:29.251 { 00:10:29.251 "name": "BaseBdev2", 00:10:29.251 "uuid": "4c9bf872-6c23-40fb-a003-8a64e078dd26", 00:10:29.251 "is_configured": true, 00:10:29.251 "data_offset": 0, 00:10:29.251 "data_size": 65536 00:10:29.251 }, 00:10:29.251 { 00:10:29.251 "name": "BaseBdev3", 00:10:29.251 "uuid": "8a2decba-4346-4f71-b694-514de6bbe252", 00:10:29.251 "is_configured": true, 00:10:29.251 "data_offset": 0, 00:10:29.251 "data_size": 65536 00:10:29.251 }, 00:10:29.251 { 00:10:29.251 "name": "BaseBdev4", 00:10:29.251 "uuid": "a51245b0-a038-4aeb-b6af-d8eb0d5023df", 00:10:29.251 "is_configured": true, 00:10:29.251 "data_offset": 0, 00:10:29.251 "data_size": 65536 00:10:29.251 } 00:10:29.251 ] 00:10:29.251 } 00:10:29.251 } 00:10:29.251 }' 00:10:29.251 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:29.251 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:29.251 BaseBdev2 00:10:29.251 BaseBdev3 
00:10:29.251 BaseBdev4' 00:10:29.251 12:36:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.511 12:36:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.511 12:36:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.511 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.511 [2024-12-14 12:36:29.196391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:29.511 [2024-12-14 12:36:29.196424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.511 [2024-12-14 12:36:29.196477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.771 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.771 "name": "Existed_Raid", 00:10:29.771 "uuid": "896a10cf-d356-428a-893e-f32a05a075d3", 00:10:29.771 "strip_size_kb": 64, 00:10:29.771 "state": "offline", 00:10:29.771 "raid_level": "raid0", 00:10:29.771 "superblock": false, 00:10:29.771 "num_base_bdevs": 4, 00:10:29.771 "num_base_bdevs_discovered": 3, 00:10:29.771 "num_base_bdevs_operational": 3, 00:10:29.771 "base_bdevs_list": [ 00:10:29.772 { 00:10:29.772 "name": null, 00:10:29.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.772 "is_configured": false, 00:10:29.772 "data_offset": 0, 00:10:29.772 "data_size": 65536 00:10:29.772 }, 00:10:29.772 { 00:10:29.772 "name": "BaseBdev2", 00:10:29.772 "uuid": "4c9bf872-6c23-40fb-a003-8a64e078dd26", 00:10:29.772 "is_configured": 
true, 00:10:29.772 "data_offset": 0, 00:10:29.772 "data_size": 65536 00:10:29.772 }, 00:10:29.772 { 00:10:29.772 "name": "BaseBdev3", 00:10:29.772 "uuid": "8a2decba-4346-4f71-b694-514de6bbe252", 00:10:29.772 "is_configured": true, 00:10:29.772 "data_offset": 0, 00:10:29.772 "data_size": 65536 00:10:29.772 }, 00:10:29.772 { 00:10:29.772 "name": "BaseBdev4", 00:10:29.772 "uuid": "a51245b0-a038-4aeb-b6af-d8eb0d5023df", 00:10:29.772 "is_configured": true, 00:10:29.772 "data_offset": 0, 00:10:29.772 "data_size": 65536 00:10:29.772 } 00:10:29.772 ] 00:10:29.772 }' 00:10:29.772 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.772 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.031 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:30.031 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.031 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.031 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.031 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.031 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.291 [2024-12-14 12:36:29.816200] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.291 12:36:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.291 [2024-12-14 12:36:29.975536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.550 12:36:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.550 [2024-12-14 12:36:30.133378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:30.550 [2024-12-14 12:36:30.133449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.550 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.810 BaseBdev2 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.810 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.810 [ 00:10:30.810 { 00:10:30.810 "name": "BaseBdev2", 00:10:30.810 "aliases": [ 00:10:30.810 "62979361-f741-4f72-abac-85677d165df0" 00:10:30.810 ], 00:10:30.810 "product_name": "Malloc disk", 00:10:30.810 "block_size": 512, 00:10:30.810 "num_blocks": 65536, 00:10:30.810 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:30.810 "assigned_rate_limits": { 00:10:30.810 "rw_ios_per_sec": 0, 00:10:30.810 "rw_mbytes_per_sec": 0, 00:10:30.810 "r_mbytes_per_sec": 0, 00:10:30.810 "w_mbytes_per_sec": 0 00:10:30.810 }, 00:10:30.810 "claimed": false, 00:10:30.810 "zoned": false, 00:10:30.810 "supported_io_types": { 00:10:30.810 "read": true, 00:10:30.810 "write": true, 00:10:30.810 "unmap": true, 00:10:30.810 "flush": true, 00:10:30.810 "reset": true, 00:10:30.810 "nvme_admin": false, 00:10:30.810 "nvme_io": false, 00:10:30.810 "nvme_io_md": false, 00:10:30.810 "write_zeroes": true, 00:10:30.810 "zcopy": true, 00:10:30.810 "get_zone_info": false, 00:10:30.811 "zone_management": false, 00:10:30.811 "zone_append": false, 00:10:30.811 "compare": false, 00:10:30.811 "compare_and_write": false, 00:10:30.811 "abort": true, 00:10:30.811 "seek_hole": false, 00:10:30.811 
"seek_data": false, 00:10:30.811 "copy": true, 00:10:30.811 "nvme_iov_md": false 00:10:30.811 }, 00:10:30.811 "memory_domains": [ 00:10:30.811 { 00:10:30.811 "dma_device_id": "system", 00:10:30.811 "dma_device_type": 1 00:10:30.811 }, 00:10:30.811 { 00:10:30.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.811 "dma_device_type": 2 00:10:30.811 } 00:10:30.811 ], 00:10:30.811 "driver_specific": {} 00:10:30.811 } 00:10:30.811 ] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.811 BaseBdev3 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.811 [ 00:10:30.811 { 00:10:30.811 "name": "BaseBdev3", 00:10:30.811 "aliases": [ 00:10:30.811 "215411e2-48db-475d-99cd-83e81577bdc0" 00:10:30.811 ], 00:10:30.811 "product_name": "Malloc disk", 00:10:30.811 "block_size": 512, 00:10:30.811 "num_blocks": 65536, 00:10:30.811 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:30.811 "assigned_rate_limits": { 00:10:30.811 "rw_ios_per_sec": 0, 00:10:30.811 "rw_mbytes_per_sec": 0, 00:10:30.811 "r_mbytes_per_sec": 0, 00:10:30.811 "w_mbytes_per_sec": 0 00:10:30.811 }, 00:10:30.811 "claimed": false, 00:10:30.811 "zoned": false, 00:10:30.811 "supported_io_types": { 00:10:30.811 "read": true, 00:10:30.811 "write": true, 00:10:30.811 "unmap": true, 00:10:30.811 "flush": true, 00:10:30.811 "reset": true, 00:10:30.811 "nvme_admin": false, 00:10:30.811 "nvme_io": false, 00:10:30.811 "nvme_io_md": false, 00:10:30.811 "write_zeroes": true, 00:10:30.811 "zcopy": true, 00:10:30.811 "get_zone_info": false, 00:10:30.811 "zone_management": false, 00:10:30.811 "zone_append": false, 00:10:30.811 "compare": false, 00:10:30.811 "compare_and_write": false, 00:10:30.811 "abort": true, 00:10:30.811 "seek_hole": false, 00:10:30.811 "seek_data": false, 
00:10:30.811 "copy": true, 00:10:30.811 "nvme_iov_md": false 00:10:30.811 }, 00:10:30.811 "memory_domains": [ 00:10:30.811 { 00:10:30.811 "dma_device_id": "system", 00:10:30.811 "dma_device_type": 1 00:10:30.811 }, 00:10:30.811 { 00:10:30.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.811 "dma_device_type": 2 00:10:30.811 } 00:10:30.811 ], 00:10:30.811 "driver_specific": {} 00:10:30.811 } 00:10:30.811 ] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.811 BaseBdev4 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:30.811 
12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.811 [ 00:10:30.811 { 00:10:30.811 "name": "BaseBdev4", 00:10:30.811 "aliases": [ 00:10:30.811 "32d6182f-fcec-464f-b9db-c6cd272607c8" 00:10:30.811 ], 00:10:30.811 "product_name": "Malloc disk", 00:10:30.811 "block_size": 512, 00:10:30.811 "num_blocks": 65536, 00:10:30.811 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:30.811 "assigned_rate_limits": { 00:10:30.811 "rw_ios_per_sec": 0, 00:10:30.811 "rw_mbytes_per_sec": 0, 00:10:30.811 "r_mbytes_per_sec": 0, 00:10:30.811 "w_mbytes_per_sec": 0 00:10:30.811 }, 00:10:30.811 "claimed": false, 00:10:30.811 "zoned": false, 00:10:30.811 "supported_io_types": { 00:10:30.811 "read": true, 00:10:30.811 "write": true, 00:10:30.811 "unmap": true, 00:10:30.811 "flush": true, 00:10:30.811 "reset": true, 00:10:30.811 "nvme_admin": false, 00:10:30.811 "nvme_io": false, 00:10:30.811 "nvme_io_md": false, 00:10:30.811 "write_zeroes": true, 00:10:30.811 "zcopy": true, 00:10:30.811 "get_zone_info": false, 00:10:30.811 "zone_management": false, 00:10:30.811 "zone_append": false, 00:10:30.811 "compare": false, 00:10:30.811 "compare_and_write": false, 00:10:30.811 "abort": true, 00:10:30.811 "seek_hole": false, 00:10:30.811 "seek_data": false, 00:10:30.811 
"copy": true, 00:10:30.811 "nvme_iov_md": false 00:10:30.811 }, 00:10:30.811 "memory_domains": [ 00:10:30.811 { 00:10:30.811 "dma_device_id": "system", 00:10:30.811 "dma_device_type": 1 00:10:30.811 }, 00:10:30.811 { 00:10:30.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.811 "dma_device_type": 2 00:10:30.811 } 00:10:30.811 ], 00:10:30.811 "driver_specific": {} 00:10:30.811 } 00:10:30.811 ] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.811 [2024-12-14 12:36:30.536799] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:30.811 [2024-12-14 12:36:30.536906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:30.811 [2024-12-14 12:36:30.536937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:30.811 [2024-12-14 12:36:30.539010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.811 [2024-12-14 12:36:30.539065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.811 12:36:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.811 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.812 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.071 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.071 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.071 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.071 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.071 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.071 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.071 "name": "Existed_Raid", 00:10:31.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.071 "strip_size_kb": 64, 00:10:31.071 "state": "configuring", 00:10:31.071 
"raid_level": "raid0", 00:10:31.071 "superblock": false, 00:10:31.071 "num_base_bdevs": 4, 00:10:31.071 "num_base_bdevs_discovered": 3, 00:10:31.071 "num_base_bdevs_operational": 4, 00:10:31.071 "base_bdevs_list": [ 00:10:31.071 { 00:10:31.071 "name": "BaseBdev1", 00:10:31.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.071 "is_configured": false, 00:10:31.071 "data_offset": 0, 00:10:31.071 "data_size": 0 00:10:31.071 }, 00:10:31.071 { 00:10:31.071 "name": "BaseBdev2", 00:10:31.071 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:31.071 "is_configured": true, 00:10:31.071 "data_offset": 0, 00:10:31.071 "data_size": 65536 00:10:31.071 }, 00:10:31.071 { 00:10:31.071 "name": "BaseBdev3", 00:10:31.071 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:31.071 "is_configured": true, 00:10:31.071 "data_offset": 0, 00:10:31.071 "data_size": 65536 00:10:31.071 }, 00:10:31.071 { 00:10:31.071 "name": "BaseBdev4", 00:10:31.071 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:31.071 "is_configured": true, 00:10:31.071 "data_offset": 0, 00:10:31.071 "data_size": 65536 00:10:31.071 } 00:10:31.071 ] 00:10:31.071 }' 00:10:31.071 12:36:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.071 12:36:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.331 [2024-12-14 12:36:31.016020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.331 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.590 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.590 "name": "Existed_Raid", 00:10:31.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.590 "strip_size_kb": 64, 00:10:31.590 "state": "configuring", 00:10:31.590 "raid_level": "raid0", 00:10:31.590 "superblock": false, 00:10:31.590 
"num_base_bdevs": 4, 00:10:31.590 "num_base_bdevs_discovered": 2, 00:10:31.590 "num_base_bdevs_operational": 4, 00:10:31.590 "base_bdevs_list": [ 00:10:31.590 { 00:10:31.590 "name": "BaseBdev1", 00:10:31.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.590 "is_configured": false, 00:10:31.590 "data_offset": 0, 00:10:31.590 "data_size": 0 00:10:31.590 }, 00:10:31.590 { 00:10:31.590 "name": null, 00:10:31.590 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:31.590 "is_configured": false, 00:10:31.590 "data_offset": 0, 00:10:31.590 "data_size": 65536 00:10:31.590 }, 00:10:31.590 { 00:10:31.590 "name": "BaseBdev3", 00:10:31.590 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:31.590 "is_configured": true, 00:10:31.590 "data_offset": 0, 00:10:31.590 "data_size": 65536 00:10:31.590 }, 00:10:31.590 { 00:10:31.590 "name": "BaseBdev4", 00:10:31.590 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:31.590 "is_configured": true, 00:10:31.590 "data_offset": 0, 00:10:31.590 "data_size": 65536 00:10:31.590 } 00:10:31.590 ] 00:10:31.590 }' 00:10:31.590 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.590 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:32.027 12:36:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.027 [2024-12-14 12:36:31.556923] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.027 BaseBdev1 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.027 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.028 [ 00:10:32.028 { 00:10:32.028 "name": "BaseBdev1", 00:10:32.028 "aliases": [ 00:10:32.028 "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5" 00:10:32.028 ], 00:10:32.028 "product_name": "Malloc disk", 00:10:32.028 "block_size": 512, 00:10:32.028 "num_blocks": 65536, 00:10:32.028 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:32.028 "assigned_rate_limits": { 00:10:32.028 "rw_ios_per_sec": 0, 00:10:32.028 "rw_mbytes_per_sec": 0, 00:10:32.028 "r_mbytes_per_sec": 0, 00:10:32.028 "w_mbytes_per_sec": 0 00:10:32.028 }, 00:10:32.028 "claimed": true, 00:10:32.028 "claim_type": "exclusive_write", 00:10:32.028 "zoned": false, 00:10:32.028 "supported_io_types": { 00:10:32.028 "read": true, 00:10:32.028 "write": true, 00:10:32.028 "unmap": true, 00:10:32.028 "flush": true, 00:10:32.028 "reset": true, 00:10:32.028 "nvme_admin": false, 00:10:32.028 "nvme_io": false, 00:10:32.028 "nvme_io_md": false, 00:10:32.028 "write_zeroes": true, 00:10:32.028 "zcopy": true, 00:10:32.028 "get_zone_info": false, 00:10:32.028 "zone_management": false, 00:10:32.028 "zone_append": false, 00:10:32.028 "compare": false, 00:10:32.028 "compare_and_write": false, 00:10:32.028 "abort": true, 00:10:32.028 "seek_hole": false, 00:10:32.028 "seek_data": false, 00:10:32.028 "copy": true, 00:10:32.028 "nvme_iov_md": false 00:10:32.028 }, 00:10:32.028 "memory_domains": [ 00:10:32.028 { 00:10:32.028 "dma_device_id": "system", 00:10:32.028 "dma_device_type": 1 00:10:32.028 }, 00:10:32.028 { 00:10:32.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.028 "dma_device_type": 2 00:10:32.028 } 00:10:32.028 ], 00:10:32.028 "driver_specific": {} 00:10:32.028 } 00:10:32.028 ] 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.028 "name": "Existed_Raid", 00:10:32.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.028 "strip_size_kb": 64, 00:10:32.028 "state": "configuring", 00:10:32.028 "raid_level": "raid0", 00:10:32.028 "superblock": false, 
00:10:32.028 "num_base_bdevs": 4, 00:10:32.028 "num_base_bdevs_discovered": 3, 00:10:32.028 "num_base_bdevs_operational": 4, 00:10:32.028 "base_bdevs_list": [ 00:10:32.028 { 00:10:32.028 "name": "BaseBdev1", 00:10:32.028 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:32.028 "is_configured": true, 00:10:32.028 "data_offset": 0, 00:10:32.028 "data_size": 65536 00:10:32.028 }, 00:10:32.028 { 00:10:32.028 "name": null, 00:10:32.028 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:32.028 "is_configured": false, 00:10:32.028 "data_offset": 0, 00:10:32.028 "data_size": 65536 00:10:32.028 }, 00:10:32.028 { 00:10:32.028 "name": "BaseBdev3", 00:10:32.028 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:32.028 "is_configured": true, 00:10:32.028 "data_offset": 0, 00:10:32.028 "data_size": 65536 00:10:32.028 }, 00:10:32.028 { 00:10:32.028 "name": "BaseBdev4", 00:10:32.028 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:32.028 "is_configured": true, 00:10:32.028 "data_offset": 0, 00:10:32.028 "data_size": 65536 00:10:32.028 } 00:10:32.028 ] 00:10:32.028 }' 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.028 12:36:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:32.610 12:36:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.610 [2024-12-14 12:36:32.100173] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.610 "name": "Existed_Raid", 00:10:32.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.610 "strip_size_kb": 64, 00:10:32.610 "state": "configuring", 00:10:32.610 "raid_level": "raid0", 00:10:32.610 "superblock": false, 00:10:32.610 "num_base_bdevs": 4, 00:10:32.610 "num_base_bdevs_discovered": 2, 00:10:32.610 "num_base_bdevs_operational": 4, 00:10:32.610 "base_bdevs_list": [ 00:10:32.610 { 00:10:32.610 "name": "BaseBdev1", 00:10:32.610 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:32.610 "is_configured": true, 00:10:32.610 "data_offset": 0, 00:10:32.610 "data_size": 65536 00:10:32.610 }, 00:10:32.610 { 00:10:32.610 "name": null, 00:10:32.610 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:32.610 "is_configured": false, 00:10:32.610 "data_offset": 0, 00:10:32.610 "data_size": 65536 00:10:32.610 }, 00:10:32.610 { 00:10:32.610 "name": null, 00:10:32.610 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:32.610 "is_configured": false, 00:10:32.610 "data_offset": 0, 00:10:32.610 "data_size": 65536 00:10:32.610 }, 00:10:32.610 { 00:10:32.610 "name": "BaseBdev4", 00:10:32.610 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:32.610 "is_configured": true, 00:10:32.610 "data_offset": 0, 00:10:32.610 "data_size": 65536 00:10:32.610 } 00:10:32.610 ] 00:10:32.610 }' 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.610 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.869 [2024-12-14 12:36:32.555330] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.869 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.127 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.127 "name": "Existed_Raid", 00:10:33.127 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.127 "strip_size_kb": 64, 00:10:33.127 "state": "configuring", 00:10:33.127 "raid_level": "raid0", 00:10:33.127 "superblock": false, 00:10:33.127 "num_base_bdevs": 4, 00:10:33.127 "num_base_bdevs_discovered": 3, 00:10:33.127 "num_base_bdevs_operational": 4, 00:10:33.127 "base_bdevs_list": [ 00:10:33.127 { 00:10:33.127 "name": "BaseBdev1", 00:10:33.127 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:33.127 "is_configured": true, 00:10:33.127 "data_offset": 0, 00:10:33.127 "data_size": 65536 00:10:33.127 }, 00:10:33.127 { 00:10:33.127 "name": null, 00:10:33.127 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:33.127 "is_configured": false, 00:10:33.127 "data_offset": 0, 00:10:33.127 "data_size": 65536 00:10:33.127 }, 00:10:33.127 { 00:10:33.127 "name": "BaseBdev3", 00:10:33.127 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:33.127 "is_configured": 
true, 00:10:33.127 "data_offset": 0, 00:10:33.127 "data_size": 65536 00:10:33.127 }, 00:10:33.127 { 00:10:33.127 "name": "BaseBdev4", 00:10:33.127 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:33.127 "is_configured": true, 00:10:33.127 "data_offset": 0, 00:10:33.127 "data_size": 65536 00:10:33.127 } 00:10:33.127 ] 00:10:33.127 }' 00:10:33.127 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.127 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.386 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.386 12:36:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:33.386 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.386 12:36:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.386 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.386 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:33.386 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:33.386 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.386 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.386 [2024-12-14 12:36:33.050570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.646 "name": "Existed_Raid", 00:10:33.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.646 "strip_size_kb": 64, 00:10:33.646 "state": "configuring", 00:10:33.646 "raid_level": "raid0", 00:10:33.646 "superblock": false, 00:10:33.646 "num_base_bdevs": 4, 00:10:33.646 "num_base_bdevs_discovered": 2, 00:10:33.646 "num_base_bdevs_operational": 4, 00:10:33.646 
"base_bdevs_list": [ 00:10:33.646 { 00:10:33.646 "name": null, 00:10:33.646 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:33.646 "is_configured": false, 00:10:33.646 "data_offset": 0, 00:10:33.646 "data_size": 65536 00:10:33.646 }, 00:10:33.646 { 00:10:33.646 "name": null, 00:10:33.646 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:33.646 "is_configured": false, 00:10:33.646 "data_offset": 0, 00:10:33.646 "data_size": 65536 00:10:33.646 }, 00:10:33.646 { 00:10:33.646 "name": "BaseBdev3", 00:10:33.646 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:33.646 "is_configured": true, 00:10:33.646 "data_offset": 0, 00:10:33.646 "data_size": 65536 00:10:33.646 }, 00:10:33.646 { 00:10:33.646 "name": "BaseBdev4", 00:10:33.646 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:33.646 "is_configured": true, 00:10:33.646 "data_offset": 0, 00:10:33.646 "data_size": 65536 00:10:33.646 } 00:10:33.646 ] 00:10:33.646 }' 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.646 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.905 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:33.905 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.905 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.905 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.905 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:34.165 12:36:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.165 [2024-12-14 12:36:33.647584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.165 "name": "Existed_Raid", 00:10:34.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.165 "strip_size_kb": 64, 00:10:34.165 "state": "configuring", 00:10:34.165 "raid_level": "raid0", 00:10:34.165 "superblock": false, 00:10:34.165 "num_base_bdevs": 4, 00:10:34.165 "num_base_bdevs_discovered": 3, 00:10:34.165 "num_base_bdevs_operational": 4, 00:10:34.165 "base_bdevs_list": [ 00:10:34.165 { 00:10:34.165 "name": null, 00:10:34.165 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:34.165 "is_configured": false, 00:10:34.165 "data_offset": 0, 00:10:34.165 "data_size": 65536 00:10:34.165 }, 00:10:34.165 { 00:10:34.165 "name": "BaseBdev2", 00:10:34.165 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:34.165 "is_configured": true, 00:10:34.165 "data_offset": 0, 00:10:34.165 "data_size": 65536 00:10:34.165 }, 00:10:34.165 { 00:10:34.165 "name": "BaseBdev3", 00:10:34.165 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:34.165 "is_configured": true, 00:10:34.165 "data_offset": 0, 00:10:34.165 "data_size": 65536 00:10:34.165 }, 00:10:34.165 { 00:10:34.165 "name": "BaseBdev4", 00:10:34.165 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:34.165 "is_configured": true, 00:10:34.165 "data_offset": 0, 00:10:34.165 "data_size": 65536 00:10:34.165 } 00:10:34.165 ] 00:10:34.165 }' 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.165 12:36:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.424 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.684 [2024-12-14 12:36:34.213338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:34.684 [2024-12-14 12:36:34.213512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:34.684 [2024-12-14 12:36:34.213543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:34.684 [2024-12-14 12:36:34.213884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:34.684 [2024-12-14 12:36:34.214104] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:34.684 [2024-12-14 12:36:34.214179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:34.684 [2024-12-14 12:36:34.214516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.684 NewBaseBdev 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.684 [ 00:10:34.684 { 
00:10:34.684 "name": "NewBaseBdev", 00:10:34.684 "aliases": [ 00:10:34.684 "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5" 00:10:34.684 ], 00:10:34.684 "product_name": "Malloc disk", 00:10:34.684 "block_size": 512, 00:10:34.684 "num_blocks": 65536, 00:10:34.684 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:34.684 "assigned_rate_limits": { 00:10:34.684 "rw_ios_per_sec": 0, 00:10:34.684 "rw_mbytes_per_sec": 0, 00:10:34.684 "r_mbytes_per_sec": 0, 00:10:34.684 "w_mbytes_per_sec": 0 00:10:34.684 }, 00:10:34.684 "claimed": true, 00:10:34.684 "claim_type": "exclusive_write", 00:10:34.684 "zoned": false, 00:10:34.684 "supported_io_types": { 00:10:34.684 "read": true, 00:10:34.684 "write": true, 00:10:34.684 "unmap": true, 00:10:34.684 "flush": true, 00:10:34.684 "reset": true, 00:10:34.684 "nvme_admin": false, 00:10:34.684 "nvme_io": false, 00:10:34.684 "nvme_io_md": false, 00:10:34.684 "write_zeroes": true, 00:10:34.684 "zcopy": true, 00:10:34.684 "get_zone_info": false, 00:10:34.684 "zone_management": false, 00:10:34.684 "zone_append": false, 00:10:34.684 "compare": false, 00:10:34.684 "compare_and_write": false, 00:10:34.684 "abort": true, 00:10:34.684 "seek_hole": false, 00:10:34.684 "seek_data": false, 00:10:34.684 "copy": true, 00:10:34.684 "nvme_iov_md": false 00:10:34.684 }, 00:10:34.684 "memory_domains": [ 00:10:34.684 { 00:10:34.684 "dma_device_id": "system", 00:10:34.684 "dma_device_type": 1 00:10:34.684 }, 00:10:34.684 { 00:10:34.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.684 "dma_device_type": 2 00:10:34.684 } 00:10:34.684 ], 00:10:34.684 "driver_specific": {} 00:10:34.684 } 00:10:34.684 ] 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:34.684 
12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.684 "name": "Existed_Raid", 00:10:34.684 "uuid": "21f08179-b4f7-481b-92fc-06738a4d116e", 00:10:34.684 "strip_size_kb": 64, 00:10:34.684 "state": "online", 00:10:34.684 "raid_level": "raid0", 00:10:34.684 "superblock": false, 00:10:34.684 "num_base_bdevs": 4, 00:10:34.684 "num_base_bdevs_discovered": 4, 00:10:34.684 
"num_base_bdevs_operational": 4, 00:10:34.684 "base_bdevs_list": [ 00:10:34.684 { 00:10:34.684 "name": "NewBaseBdev", 00:10:34.684 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:34.684 "is_configured": true, 00:10:34.684 "data_offset": 0, 00:10:34.684 "data_size": 65536 00:10:34.684 }, 00:10:34.684 { 00:10:34.684 "name": "BaseBdev2", 00:10:34.684 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:34.684 "is_configured": true, 00:10:34.684 "data_offset": 0, 00:10:34.684 "data_size": 65536 00:10:34.684 }, 00:10:34.684 { 00:10:34.684 "name": "BaseBdev3", 00:10:34.684 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:34.684 "is_configured": true, 00:10:34.684 "data_offset": 0, 00:10:34.684 "data_size": 65536 00:10:34.684 }, 00:10:34.684 { 00:10:34.684 "name": "BaseBdev4", 00:10:34.684 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:34.684 "is_configured": true, 00:10:34.684 "data_offset": 0, 00:10:34.684 "data_size": 65536 00:10:34.684 } 00:10:34.684 ] 00:10:34.684 }' 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.684 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.252 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:35.252 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:35.252 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:35.253 
12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.253 [2024-12-14 12:36:34.740936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:35.253 "name": "Existed_Raid", 00:10:35.253 "aliases": [ 00:10:35.253 "21f08179-b4f7-481b-92fc-06738a4d116e" 00:10:35.253 ], 00:10:35.253 "product_name": "Raid Volume", 00:10:35.253 "block_size": 512, 00:10:35.253 "num_blocks": 262144, 00:10:35.253 "uuid": "21f08179-b4f7-481b-92fc-06738a4d116e", 00:10:35.253 "assigned_rate_limits": { 00:10:35.253 "rw_ios_per_sec": 0, 00:10:35.253 "rw_mbytes_per_sec": 0, 00:10:35.253 "r_mbytes_per_sec": 0, 00:10:35.253 "w_mbytes_per_sec": 0 00:10:35.253 }, 00:10:35.253 "claimed": false, 00:10:35.253 "zoned": false, 00:10:35.253 "supported_io_types": { 00:10:35.253 "read": true, 00:10:35.253 "write": true, 00:10:35.253 "unmap": true, 00:10:35.253 "flush": true, 00:10:35.253 "reset": true, 00:10:35.253 "nvme_admin": false, 00:10:35.253 "nvme_io": false, 00:10:35.253 "nvme_io_md": false, 00:10:35.253 "write_zeroes": true, 00:10:35.253 "zcopy": false, 00:10:35.253 "get_zone_info": false, 00:10:35.253 "zone_management": false, 00:10:35.253 "zone_append": false, 00:10:35.253 "compare": false, 00:10:35.253 "compare_and_write": false, 00:10:35.253 "abort": false, 00:10:35.253 "seek_hole": false, 00:10:35.253 "seek_data": false, 00:10:35.253 "copy": false, 00:10:35.253 "nvme_iov_md": false 00:10:35.253 }, 00:10:35.253 "memory_domains": [ 00:10:35.253 { 00:10:35.253 "dma_device_id": 
"system", 00:10:35.253 "dma_device_type": 1 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.253 "dma_device_type": 2 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "dma_device_id": "system", 00:10:35.253 "dma_device_type": 1 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.253 "dma_device_type": 2 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "dma_device_id": "system", 00:10:35.253 "dma_device_type": 1 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.253 "dma_device_type": 2 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "dma_device_id": "system", 00:10:35.253 "dma_device_type": 1 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.253 "dma_device_type": 2 00:10:35.253 } 00:10:35.253 ], 00:10:35.253 "driver_specific": { 00:10:35.253 "raid": { 00:10:35.253 "uuid": "21f08179-b4f7-481b-92fc-06738a4d116e", 00:10:35.253 "strip_size_kb": 64, 00:10:35.253 "state": "online", 00:10:35.253 "raid_level": "raid0", 00:10:35.253 "superblock": false, 00:10:35.253 "num_base_bdevs": 4, 00:10:35.253 "num_base_bdevs_discovered": 4, 00:10:35.253 "num_base_bdevs_operational": 4, 00:10:35.253 "base_bdevs_list": [ 00:10:35.253 { 00:10:35.253 "name": "NewBaseBdev", 00:10:35.253 "uuid": "ea7702ca-3a25-45a0-9bd6-6cb61c9b5fd5", 00:10:35.253 "is_configured": true, 00:10:35.253 "data_offset": 0, 00:10:35.253 "data_size": 65536 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "name": "BaseBdev2", 00:10:35.253 "uuid": "62979361-f741-4f72-abac-85677d165df0", 00:10:35.253 "is_configured": true, 00:10:35.253 "data_offset": 0, 00:10:35.253 "data_size": 65536 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "name": "BaseBdev3", 00:10:35.253 "uuid": "215411e2-48db-475d-99cd-83e81577bdc0", 00:10:35.253 "is_configured": true, 00:10:35.253 "data_offset": 0, 00:10:35.253 "data_size": 65536 00:10:35.253 }, 00:10:35.253 { 00:10:35.253 "name": 
"BaseBdev4", 00:10:35.253 "uuid": "32d6182f-fcec-464f-b9db-c6cd272607c8", 00:10:35.253 "is_configured": true, 00:10:35.253 "data_offset": 0, 00:10:35.253 "data_size": 65536 00:10:35.253 } 00:10:35.253 ] 00:10:35.253 } 00:10:35.253 } 00:10:35.253 }' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:35.253 BaseBdev2 00:10:35.253 BaseBdev3 00:10:35.253 BaseBdev4' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.253 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.514 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.514 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.514 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:35.514 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:35.514 12:36:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.514 12:36:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.514 12:36:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.514 [2024-12-14 12:36:35.055978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.514 [2024-12-14 12:36:35.056009] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.514 [2024-12-14 12:36:35.056126] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.514 [2024-12-14 12:36:35.056202] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.514 [2024-12-14 12:36:35.056213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71169 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 71169 ']' 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71169 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71169 00:10:35.514 killing process with pid 71169 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71169' 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71169 00:10:35.514 12:36:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71169 00:10:35.514 [2024-12-14 12:36:35.098308] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:36.081 [2024-12-14 12:36:35.518775] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:37.019 ************************************ 00:10:37.019 END TEST raid_state_function_test 00:10:37.019 ************************************ 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:37.019 00:10:37.019 real 0m11.589s 00:10:37.019 user 0m18.387s 00:10:37.019 sys 0m1.993s 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.019 12:36:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 
00:10:37.019 12:36:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:37.019 12:36:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.019 12:36:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:37.019 ************************************ 00:10:37.019 START TEST raid_state_function_test_sb 00:10:37.019 ************************************ 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:37.019 12:36:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:37.019 Process raid pid: 71841 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71841 00:10:37.019 12:36:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71841' 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71841 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71841 ']' 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.019 12:36:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:37.278 [2024-12-14 12:36:36.839352] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:37.278 [2024-12-14 12:36:36.839572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.537 [2024-12-14 12:36:37.018106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.537 [2024-12-14 12:36:37.137483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.796 [2024-12-14 12:36:37.347182] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:37.796 [2024-12-14 12:36:37.347230] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.055 [2024-12-14 12:36:37.705743] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.055 [2024-12-14 12:36:37.705796] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.055 [2024-12-14 12:36:37.705807] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.055 [2024-12-14 12:36:37.705816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.055 [2024-12-14 12:36:37.705822] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:38.055 [2024-12-14 12:36:37.705831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.055 [2024-12-14 12:36:37.705837] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:38.055 [2024-12-14 12:36:37.705845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.055 12:36:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.055 "name": "Existed_Raid", 00:10:38.055 "uuid": "51e085ea-a57e-43cd-81e2-7e207cfea5a2", 00:10:38.055 "strip_size_kb": 64, 00:10:38.055 "state": "configuring", 00:10:38.055 "raid_level": "raid0", 00:10:38.055 "superblock": true, 00:10:38.055 "num_base_bdevs": 4, 00:10:38.055 "num_base_bdevs_discovered": 0, 00:10:38.055 "num_base_bdevs_operational": 4, 00:10:38.055 "base_bdevs_list": [ 00:10:38.055 { 00:10:38.055 "name": "BaseBdev1", 00:10:38.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.055 "is_configured": false, 00:10:38.055 "data_offset": 0, 00:10:38.055 "data_size": 0 00:10:38.055 }, 00:10:38.055 { 00:10:38.055 "name": "BaseBdev2", 00:10:38.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.055 "is_configured": false, 00:10:38.055 "data_offset": 0, 00:10:38.055 "data_size": 0 00:10:38.055 }, 00:10:38.055 { 00:10:38.055 "name": "BaseBdev3", 00:10:38.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.055 "is_configured": false, 00:10:38.055 "data_offset": 0, 00:10:38.055 "data_size": 0 00:10:38.055 }, 00:10:38.055 { 00:10:38.055 "name": "BaseBdev4", 00:10:38.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.055 "is_configured": false, 00:10:38.055 "data_offset": 0, 00:10:38.055 "data_size": 0 00:10:38.055 } 00:10:38.055 ] 00:10:38.055 }' 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.055 12:36:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.624 [2024-12-14 12:36:38.132952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:38.624 [2024-12-14 12:36:38.133049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.624 [2024-12-14 12:36:38.140930] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.624 [2024-12-14 12:36:38.141003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.624 [2024-12-14 12:36:38.141031] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:38.624 [2024-12-14 12:36:38.141068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:38.624 [2024-12-14 12:36:38.141087] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:38.624 [2024-12-14 12:36:38.141124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:38.624 [2024-12-14 12:36:38.141142] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:38.624 [2024-12-14 12:36:38.141163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.624 [2024-12-14 12:36:38.184586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:38.624 BaseBdev1 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.624 [ 00:10:38.624 { 00:10:38.624 "name": "BaseBdev1", 00:10:38.624 "aliases": [ 00:10:38.624 "03b520c6-249f-403c-8350-9d1d6138d2aa" 00:10:38.624 ], 00:10:38.624 "product_name": "Malloc disk", 00:10:38.624 "block_size": 512, 00:10:38.624 "num_blocks": 65536, 00:10:38.624 "uuid": "03b520c6-249f-403c-8350-9d1d6138d2aa", 00:10:38.624 "assigned_rate_limits": { 00:10:38.624 "rw_ios_per_sec": 0, 00:10:38.624 "rw_mbytes_per_sec": 0, 00:10:38.624 "r_mbytes_per_sec": 0, 00:10:38.624 "w_mbytes_per_sec": 0 00:10:38.624 }, 00:10:38.624 "claimed": true, 00:10:38.624 "claim_type": "exclusive_write", 00:10:38.624 "zoned": false, 00:10:38.624 "supported_io_types": { 00:10:38.624 "read": true, 00:10:38.624 "write": true, 00:10:38.624 "unmap": true, 00:10:38.624 "flush": true, 00:10:38.624 "reset": true, 00:10:38.624 "nvme_admin": false, 00:10:38.624 "nvme_io": false, 00:10:38.624 "nvme_io_md": false, 00:10:38.624 "write_zeroes": true, 00:10:38.624 "zcopy": true, 00:10:38.624 "get_zone_info": false, 00:10:38.624 "zone_management": false, 00:10:38.624 "zone_append": false, 00:10:38.624 "compare": false, 00:10:38.624 "compare_and_write": false, 00:10:38.624 "abort": true, 00:10:38.624 "seek_hole": false, 00:10:38.624 "seek_data": false, 00:10:38.624 "copy": true, 00:10:38.624 "nvme_iov_md": false 00:10:38.624 }, 00:10:38.624 "memory_domains": [ 00:10:38.624 { 00:10:38.624 "dma_device_id": "system", 00:10:38.624 "dma_device_type": 1 00:10:38.624 }, 00:10:38.624 { 00:10:38.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.624 "dma_device_type": 2 00:10:38.624 } 00:10:38.624 ], 00:10:38.624 "driver_specific": {} 
00:10:38.624 } 00:10:38.624 ] 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.624 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.625 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.625 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.625 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.625 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.625 12:36:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.625 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.625 "name": "Existed_Raid", 00:10:38.625 "uuid": "3f445d56-fdad-4ad6-86eb-af1968c43554", 00:10:38.625 "strip_size_kb": 64, 00:10:38.625 "state": "configuring", 00:10:38.625 "raid_level": "raid0", 00:10:38.625 "superblock": true, 00:10:38.625 "num_base_bdevs": 4, 00:10:38.625 "num_base_bdevs_discovered": 1, 00:10:38.625 "num_base_bdevs_operational": 4, 00:10:38.625 "base_bdevs_list": [ 00:10:38.625 { 00:10:38.625 "name": "BaseBdev1", 00:10:38.625 "uuid": "03b520c6-249f-403c-8350-9d1d6138d2aa", 00:10:38.625 "is_configured": true, 00:10:38.625 "data_offset": 2048, 00:10:38.625 "data_size": 63488 00:10:38.625 }, 00:10:38.625 { 00:10:38.625 "name": "BaseBdev2", 00:10:38.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.625 "is_configured": false, 00:10:38.625 "data_offset": 0, 00:10:38.625 "data_size": 0 00:10:38.625 }, 00:10:38.625 { 00:10:38.625 "name": "BaseBdev3", 00:10:38.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.625 "is_configured": false, 00:10:38.625 "data_offset": 0, 00:10:38.625 "data_size": 0 00:10:38.625 }, 00:10:38.625 { 00:10:38.625 "name": "BaseBdev4", 00:10:38.625 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.625 "is_configured": false, 00:10:38.625 "data_offset": 0, 00:10:38.625 "data_size": 0 00:10:38.625 } 00:10:38.625 ] 00:10:38.625 }' 00:10:38.625 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.625 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.193 [2024-12-14 12:36:38.655879] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:39.193 [2024-12-14 12:36:38.656011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.193 [2024-12-14 12:36:38.663914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.193 [2024-12-14 12:36:38.666222] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:39.193 [2024-12-14 12:36:38.666341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:39.193 [2024-12-14 12:36:38.666398] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:39.193 [2024-12-14 12:36:38.666443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:39.193 [2024-12-14 12:36:38.666493] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:39.193 [2024-12-14 12:36:38.666543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:39.193 12:36:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.193 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.193 "name": 
"Existed_Raid", 00:10:39.193 "uuid": "91745c4d-07b2-4edc-b1e3-f8a29d008574", 00:10:39.193 "strip_size_kb": 64, 00:10:39.193 "state": "configuring", 00:10:39.193 "raid_level": "raid0", 00:10:39.193 "superblock": true, 00:10:39.193 "num_base_bdevs": 4, 00:10:39.193 "num_base_bdevs_discovered": 1, 00:10:39.193 "num_base_bdevs_operational": 4, 00:10:39.193 "base_bdevs_list": [ 00:10:39.193 { 00:10:39.193 "name": "BaseBdev1", 00:10:39.193 "uuid": "03b520c6-249f-403c-8350-9d1d6138d2aa", 00:10:39.193 "is_configured": true, 00:10:39.194 "data_offset": 2048, 00:10:39.194 "data_size": 63488 00:10:39.194 }, 00:10:39.194 { 00:10:39.194 "name": "BaseBdev2", 00:10:39.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.194 "is_configured": false, 00:10:39.194 "data_offset": 0, 00:10:39.194 "data_size": 0 00:10:39.194 }, 00:10:39.194 { 00:10:39.194 "name": "BaseBdev3", 00:10:39.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.194 "is_configured": false, 00:10:39.194 "data_offset": 0, 00:10:39.194 "data_size": 0 00:10:39.194 }, 00:10:39.194 { 00:10:39.194 "name": "BaseBdev4", 00:10:39.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.194 "is_configured": false, 00:10:39.194 "data_offset": 0, 00:10:39.194 "data_size": 0 00:10:39.194 } 00:10:39.194 ] 00:10:39.194 }' 00:10:39.194 12:36:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.194 12:36:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.453 [2024-12-14 12:36:39.081019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:39.453 BaseBdev2 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.453 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.453 [ 00:10:39.453 { 00:10:39.453 "name": "BaseBdev2", 00:10:39.453 "aliases": [ 00:10:39.453 "95c87c10-3b87-490b-82a7-6eb191d49bb6" 00:10:39.454 ], 00:10:39.454 "product_name": "Malloc disk", 00:10:39.454 "block_size": 512, 00:10:39.454 "num_blocks": 65536, 00:10:39.454 "uuid": "95c87c10-3b87-490b-82a7-6eb191d49bb6", 00:10:39.454 
"assigned_rate_limits": { 00:10:39.454 "rw_ios_per_sec": 0, 00:10:39.454 "rw_mbytes_per_sec": 0, 00:10:39.454 "r_mbytes_per_sec": 0, 00:10:39.454 "w_mbytes_per_sec": 0 00:10:39.454 }, 00:10:39.454 "claimed": true, 00:10:39.454 "claim_type": "exclusive_write", 00:10:39.454 "zoned": false, 00:10:39.454 "supported_io_types": { 00:10:39.454 "read": true, 00:10:39.454 "write": true, 00:10:39.454 "unmap": true, 00:10:39.454 "flush": true, 00:10:39.454 "reset": true, 00:10:39.454 "nvme_admin": false, 00:10:39.454 "nvme_io": false, 00:10:39.454 "nvme_io_md": false, 00:10:39.454 "write_zeroes": true, 00:10:39.454 "zcopy": true, 00:10:39.454 "get_zone_info": false, 00:10:39.454 "zone_management": false, 00:10:39.454 "zone_append": false, 00:10:39.454 "compare": false, 00:10:39.454 "compare_and_write": false, 00:10:39.454 "abort": true, 00:10:39.454 "seek_hole": false, 00:10:39.454 "seek_data": false, 00:10:39.454 "copy": true, 00:10:39.454 "nvme_iov_md": false 00:10:39.454 }, 00:10:39.454 "memory_domains": [ 00:10:39.454 { 00:10:39.454 "dma_device_id": "system", 00:10:39.454 "dma_device_type": 1 00:10:39.454 }, 00:10:39.454 { 00:10:39.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.454 "dma_device_type": 2 00:10:39.454 } 00:10:39.454 ], 00:10:39.454 "driver_specific": {} 00:10:39.454 } 00:10:39.454 ] 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.454 "name": "Existed_Raid", 00:10:39.454 "uuid": "91745c4d-07b2-4edc-b1e3-f8a29d008574", 00:10:39.454 "strip_size_kb": 64, 00:10:39.454 "state": "configuring", 00:10:39.454 "raid_level": "raid0", 00:10:39.454 "superblock": true, 00:10:39.454 "num_base_bdevs": 4, 00:10:39.454 "num_base_bdevs_discovered": 2, 00:10:39.454 "num_base_bdevs_operational": 4, 
00:10:39.454 "base_bdevs_list": [ 00:10:39.454 { 00:10:39.454 "name": "BaseBdev1", 00:10:39.454 "uuid": "03b520c6-249f-403c-8350-9d1d6138d2aa", 00:10:39.454 "is_configured": true, 00:10:39.454 "data_offset": 2048, 00:10:39.454 "data_size": 63488 00:10:39.454 }, 00:10:39.454 { 00:10:39.454 "name": "BaseBdev2", 00:10:39.454 "uuid": "95c87c10-3b87-490b-82a7-6eb191d49bb6", 00:10:39.454 "is_configured": true, 00:10:39.454 "data_offset": 2048, 00:10:39.454 "data_size": 63488 00:10:39.454 }, 00:10:39.454 { 00:10:39.454 "name": "BaseBdev3", 00:10:39.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.454 "is_configured": false, 00:10:39.454 "data_offset": 0, 00:10:39.454 "data_size": 0 00:10:39.454 }, 00:10:39.454 { 00:10:39.454 "name": "BaseBdev4", 00:10:39.454 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.454 "is_configured": false, 00:10:39.454 "data_offset": 0, 00:10:39.454 "data_size": 0 00:10:39.454 } 00:10:39.454 ] 00:10:39.454 }' 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.454 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.024 [2024-12-14 12:36:39.583932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.024 BaseBdev3 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.024 [ 00:10:40.024 { 00:10:40.024 "name": "BaseBdev3", 00:10:40.024 "aliases": [ 00:10:40.024 "ad22affc-250d-4564-adcb-8343757e1471" 00:10:40.024 ], 00:10:40.024 "product_name": "Malloc disk", 00:10:40.024 "block_size": 512, 00:10:40.024 "num_blocks": 65536, 00:10:40.024 "uuid": "ad22affc-250d-4564-adcb-8343757e1471", 00:10:40.024 "assigned_rate_limits": { 00:10:40.024 "rw_ios_per_sec": 0, 00:10:40.024 "rw_mbytes_per_sec": 0, 00:10:40.024 "r_mbytes_per_sec": 0, 00:10:40.024 "w_mbytes_per_sec": 0 00:10:40.024 }, 00:10:40.024 "claimed": true, 00:10:40.024 "claim_type": "exclusive_write", 00:10:40.024 "zoned": false, 00:10:40.024 "supported_io_types": { 00:10:40.024 "read": true, 00:10:40.024 
"write": true, 00:10:40.024 "unmap": true, 00:10:40.024 "flush": true, 00:10:40.024 "reset": true, 00:10:40.024 "nvme_admin": false, 00:10:40.024 "nvme_io": false, 00:10:40.024 "nvme_io_md": false, 00:10:40.024 "write_zeroes": true, 00:10:40.024 "zcopy": true, 00:10:40.024 "get_zone_info": false, 00:10:40.024 "zone_management": false, 00:10:40.024 "zone_append": false, 00:10:40.024 "compare": false, 00:10:40.024 "compare_and_write": false, 00:10:40.024 "abort": true, 00:10:40.024 "seek_hole": false, 00:10:40.024 "seek_data": false, 00:10:40.024 "copy": true, 00:10:40.024 "nvme_iov_md": false 00:10:40.024 }, 00:10:40.024 "memory_domains": [ 00:10:40.024 { 00:10:40.024 "dma_device_id": "system", 00:10:40.024 "dma_device_type": 1 00:10:40.024 }, 00:10:40.024 { 00:10:40.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.024 "dma_device_type": 2 00:10:40.024 } 00:10:40.024 ], 00:10:40.024 "driver_specific": {} 00:10:40.024 } 00:10:40.024 ] 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.024 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.025 "name": "Existed_Raid", 00:10:40.025 "uuid": "91745c4d-07b2-4edc-b1e3-f8a29d008574", 00:10:40.025 "strip_size_kb": 64, 00:10:40.025 "state": "configuring", 00:10:40.025 "raid_level": "raid0", 00:10:40.025 "superblock": true, 00:10:40.025 "num_base_bdevs": 4, 00:10:40.025 "num_base_bdevs_discovered": 3, 00:10:40.025 "num_base_bdevs_operational": 4, 00:10:40.025 "base_bdevs_list": [ 00:10:40.025 { 00:10:40.025 "name": "BaseBdev1", 00:10:40.025 "uuid": "03b520c6-249f-403c-8350-9d1d6138d2aa", 00:10:40.025 "is_configured": true, 00:10:40.025 "data_offset": 2048, 00:10:40.025 "data_size": 63488 00:10:40.025 }, 00:10:40.025 { 00:10:40.025 "name": "BaseBdev2", 00:10:40.025 "uuid": 
"95c87c10-3b87-490b-82a7-6eb191d49bb6", 00:10:40.025 "is_configured": true, 00:10:40.025 "data_offset": 2048, 00:10:40.025 "data_size": 63488 00:10:40.025 }, 00:10:40.025 { 00:10:40.025 "name": "BaseBdev3", 00:10:40.025 "uuid": "ad22affc-250d-4564-adcb-8343757e1471", 00:10:40.025 "is_configured": true, 00:10:40.025 "data_offset": 2048, 00:10:40.025 "data_size": 63488 00:10:40.025 }, 00:10:40.025 { 00:10:40.025 "name": "BaseBdev4", 00:10:40.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.025 "is_configured": false, 00:10:40.025 "data_offset": 0, 00:10:40.025 "data_size": 0 00:10:40.025 } 00:10:40.025 ] 00:10:40.025 }' 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.025 12:36:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.594 [2024-12-14 12:36:40.080607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.594 [2024-12-14 12:36:40.080997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:40.594 [2024-12-14 12:36:40.081072] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:40.594 BaseBdev4 00:10:40.594 [2024-12-14 12:36:40.081372] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:40.594 [2024-12-14 12:36:40.081532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:40.594 [2024-12-14 12:36:40.081584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.594 [2024-12-14 12:36:40.081761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.594 [ 00:10:40.594 { 00:10:40.594 "name": "BaseBdev4", 00:10:40.594 "aliases": [ 00:10:40.594 "48b01b74-2329-4df9-9426-8c148fd2d879" 00:10:40.594 ], 00:10:40.594 "product_name": "Malloc disk", 00:10:40.594 "block_size": 512, 00:10:40.594 
"num_blocks": 65536, 00:10:40.594 "uuid": "48b01b74-2329-4df9-9426-8c148fd2d879", 00:10:40.594 "assigned_rate_limits": { 00:10:40.594 "rw_ios_per_sec": 0, 00:10:40.594 "rw_mbytes_per_sec": 0, 00:10:40.594 "r_mbytes_per_sec": 0, 00:10:40.594 "w_mbytes_per_sec": 0 00:10:40.594 }, 00:10:40.594 "claimed": true, 00:10:40.594 "claim_type": "exclusive_write", 00:10:40.594 "zoned": false, 00:10:40.594 "supported_io_types": { 00:10:40.594 "read": true, 00:10:40.594 "write": true, 00:10:40.594 "unmap": true, 00:10:40.594 "flush": true, 00:10:40.594 "reset": true, 00:10:40.594 "nvme_admin": false, 00:10:40.594 "nvme_io": false, 00:10:40.594 "nvme_io_md": false, 00:10:40.594 "write_zeroes": true, 00:10:40.594 "zcopy": true, 00:10:40.594 "get_zone_info": false, 00:10:40.594 "zone_management": false, 00:10:40.594 "zone_append": false, 00:10:40.594 "compare": false, 00:10:40.594 "compare_and_write": false, 00:10:40.594 "abort": true, 00:10:40.594 "seek_hole": false, 00:10:40.594 "seek_data": false, 00:10:40.594 "copy": true, 00:10:40.594 "nvme_iov_md": false 00:10:40.594 }, 00:10:40.594 "memory_domains": [ 00:10:40.594 { 00:10:40.594 "dma_device_id": "system", 00:10:40.594 "dma_device_type": 1 00:10:40.594 }, 00:10:40.594 { 00:10:40.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.594 "dma_device_type": 2 00:10:40.594 } 00:10:40.594 ], 00:10:40.594 "driver_specific": {} 00:10:40.594 } 00:10:40.594 ] 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.594 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.595 "name": "Existed_Raid", 00:10:40.595 "uuid": "91745c4d-07b2-4edc-b1e3-f8a29d008574", 00:10:40.595 "strip_size_kb": 64, 00:10:40.595 "state": "online", 00:10:40.595 "raid_level": "raid0", 00:10:40.595 "superblock": true, 00:10:40.595 "num_base_bdevs": 4, 
00:10:40.595 "num_base_bdevs_discovered": 4, 00:10:40.595 "num_base_bdevs_operational": 4, 00:10:40.595 "base_bdevs_list": [ 00:10:40.595 { 00:10:40.595 "name": "BaseBdev1", 00:10:40.595 "uuid": "03b520c6-249f-403c-8350-9d1d6138d2aa", 00:10:40.595 "is_configured": true, 00:10:40.595 "data_offset": 2048, 00:10:40.595 "data_size": 63488 00:10:40.595 }, 00:10:40.595 { 00:10:40.595 "name": "BaseBdev2", 00:10:40.595 "uuid": "95c87c10-3b87-490b-82a7-6eb191d49bb6", 00:10:40.595 "is_configured": true, 00:10:40.595 "data_offset": 2048, 00:10:40.595 "data_size": 63488 00:10:40.595 }, 00:10:40.595 { 00:10:40.595 "name": "BaseBdev3", 00:10:40.595 "uuid": "ad22affc-250d-4564-adcb-8343757e1471", 00:10:40.595 "is_configured": true, 00:10:40.595 "data_offset": 2048, 00:10:40.595 "data_size": 63488 00:10:40.595 }, 00:10:40.595 { 00:10:40.595 "name": "BaseBdev4", 00:10:40.595 "uuid": "48b01b74-2329-4df9-9426-8c148fd2d879", 00:10:40.595 "is_configured": true, 00:10:40.595 "data_offset": 2048, 00:10:40.595 "data_size": 63488 00:10:40.595 } 00:10:40.595 ] 00:10:40.595 }' 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.595 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.854 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.854 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.854 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.854 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.854 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.855 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.855 
12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.855 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.855 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.855 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.855 [2024-12-14 12:36:40.544294] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.855 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.855 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.855 "name": "Existed_Raid", 00:10:40.855 "aliases": [ 00:10:40.855 "91745c4d-07b2-4edc-b1e3-f8a29d008574" 00:10:40.855 ], 00:10:40.855 "product_name": "Raid Volume", 00:10:40.855 "block_size": 512, 00:10:40.855 "num_blocks": 253952, 00:10:40.855 "uuid": "91745c4d-07b2-4edc-b1e3-f8a29d008574", 00:10:40.855 "assigned_rate_limits": { 00:10:40.855 "rw_ios_per_sec": 0, 00:10:40.855 "rw_mbytes_per_sec": 0, 00:10:40.855 "r_mbytes_per_sec": 0, 00:10:40.855 "w_mbytes_per_sec": 0 00:10:40.855 }, 00:10:40.855 "claimed": false, 00:10:40.855 "zoned": false, 00:10:40.855 "supported_io_types": { 00:10:40.855 "read": true, 00:10:40.855 "write": true, 00:10:40.855 "unmap": true, 00:10:40.855 "flush": true, 00:10:40.855 "reset": true, 00:10:40.855 "nvme_admin": false, 00:10:40.855 "nvme_io": false, 00:10:40.855 "nvme_io_md": false, 00:10:40.855 "write_zeroes": true, 00:10:40.855 "zcopy": false, 00:10:40.855 "get_zone_info": false, 00:10:40.855 "zone_management": false, 00:10:40.855 "zone_append": false, 00:10:40.855 "compare": false, 00:10:40.855 "compare_and_write": false, 00:10:40.855 "abort": false, 00:10:40.855 "seek_hole": false, 00:10:40.855 "seek_data": false, 00:10:40.855 "copy": false, 00:10:40.855 
"nvme_iov_md": false 00:10:40.855 }, 00:10:40.855 "memory_domains": [ 00:10:40.855 { 00:10:40.855 "dma_device_id": "system", 00:10:40.855 "dma_device_type": 1 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.855 "dma_device_type": 2 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "dma_device_id": "system", 00:10:40.855 "dma_device_type": 1 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.855 "dma_device_type": 2 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "dma_device_id": "system", 00:10:40.855 "dma_device_type": 1 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.855 "dma_device_type": 2 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "dma_device_id": "system", 00:10:40.855 "dma_device_type": 1 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.855 "dma_device_type": 2 00:10:40.855 } 00:10:40.855 ], 00:10:40.855 "driver_specific": { 00:10:40.855 "raid": { 00:10:40.855 "uuid": "91745c4d-07b2-4edc-b1e3-f8a29d008574", 00:10:40.855 "strip_size_kb": 64, 00:10:40.855 "state": "online", 00:10:40.855 "raid_level": "raid0", 00:10:40.855 "superblock": true, 00:10:40.855 "num_base_bdevs": 4, 00:10:40.855 "num_base_bdevs_discovered": 4, 00:10:40.855 "num_base_bdevs_operational": 4, 00:10:40.855 "base_bdevs_list": [ 00:10:40.855 { 00:10:40.855 "name": "BaseBdev1", 00:10:40.855 "uuid": "03b520c6-249f-403c-8350-9d1d6138d2aa", 00:10:40.855 "is_configured": true, 00:10:40.855 "data_offset": 2048, 00:10:40.855 "data_size": 63488 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "name": "BaseBdev2", 00:10:40.855 "uuid": "95c87c10-3b87-490b-82a7-6eb191d49bb6", 00:10:40.855 "is_configured": true, 00:10:40.855 "data_offset": 2048, 00:10:40.855 "data_size": 63488 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "name": "BaseBdev3", 00:10:40.855 "uuid": "ad22affc-250d-4564-adcb-8343757e1471", 00:10:40.855 "is_configured": true, 
00:10:40.855 "data_offset": 2048, 00:10:40.855 "data_size": 63488 00:10:40.855 }, 00:10:40.855 { 00:10:40.855 "name": "BaseBdev4", 00:10:40.855 "uuid": "48b01b74-2329-4df9-9426-8c148fd2d879", 00:10:40.855 "is_configured": true, 00:10:40.855 "data_offset": 2048, 00:10:40.855 "data_size": 63488 00:10:40.855 } 00:10:40.855 ] 00:10:40.855 } 00:10:40.855 } 00:10:40.855 }' 00:10:40.855 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:41.115 BaseBdev2 00:10:41.115 BaseBdev3 00:10:41.115 BaseBdev4' 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.115 12:36:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.115 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.116 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.116 [2024-12-14 12:36:40.823481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.116 [2024-12-14 12:36:40.823515] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:41.116 [2024-12-14 12:36:40.823572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:41.381 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.381 "name": "Existed_Raid", 00:10:41.381 "uuid": "91745c4d-07b2-4edc-b1e3-f8a29d008574", 00:10:41.381 "strip_size_kb": 64, 00:10:41.381 "state": "offline", 00:10:41.381 "raid_level": "raid0", 00:10:41.381 "superblock": true, 00:10:41.381 "num_base_bdevs": 4, 00:10:41.381 "num_base_bdevs_discovered": 3, 00:10:41.381 "num_base_bdevs_operational": 3, 00:10:41.381 "base_bdevs_list": [ 00:10:41.381 { 00:10:41.381 "name": null, 00:10:41.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.381 "is_configured": false, 00:10:41.381 "data_offset": 0, 00:10:41.381 "data_size": 63488 00:10:41.381 }, 00:10:41.381 { 00:10:41.381 "name": "BaseBdev2", 00:10:41.381 "uuid": "95c87c10-3b87-490b-82a7-6eb191d49bb6", 00:10:41.381 "is_configured": true, 00:10:41.381 "data_offset": 2048, 00:10:41.381 "data_size": 63488 00:10:41.381 }, 00:10:41.381 { 00:10:41.381 "name": "BaseBdev3", 00:10:41.381 "uuid": "ad22affc-250d-4564-adcb-8343757e1471", 00:10:41.381 "is_configured": true, 00:10:41.381 "data_offset": 2048, 00:10:41.381 "data_size": 63488 00:10:41.381 }, 00:10:41.381 { 00:10:41.381 "name": "BaseBdev4", 00:10:41.381 "uuid": "48b01b74-2329-4df9-9426-8c148fd2d879", 00:10:41.381 "is_configured": true, 00:10:41.381 "data_offset": 2048, 00:10:41.381 "data_size": 63488 00:10:41.381 } 00:10:41.382 ] 00:10:41.382 }' 00:10:41.382 12:36:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.382 12:36:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.658 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:41.658 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.658 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.658 
12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.658 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.658 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.658 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.917 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.917 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.917 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:41.917 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.917 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.917 [2024-12-14 12:36:41.420996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:41.917 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.917 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.918 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.918 [2024-12-14 12:36:41.574277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:42.177 12:36:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.177 [2024-12-14 12:36:41.733394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:42.177 [2024-12-14 12:36:41.733447] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.177 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.437 BaseBdev2 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.437 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.437 [ 00:10:42.437 { 00:10:42.437 "name": "BaseBdev2", 00:10:42.437 "aliases": [ 00:10:42.437 
"b5fbb623-a716-4d6b-b76a-551e12b31606" 00:10:42.437 ], 00:10:42.437 "product_name": "Malloc disk", 00:10:42.437 "block_size": 512, 00:10:42.437 "num_blocks": 65536, 00:10:42.437 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:42.437 "assigned_rate_limits": { 00:10:42.438 "rw_ios_per_sec": 0, 00:10:42.438 "rw_mbytes_per_sec": 0, 00:10:42.438 "r_mbytes_per_sec": 0, 00:10:42.438 "w_mbytes_per_sec": 0 00:10:42.438 }, 00:10:42.438 "claimed": false, 00:10:42.438 "zoned": false, 00:10:42.438 "supported_io_types": { 00:10:42.438 "read": true, 00:10:42.438 "write": true, 00:10:42.438 "unmap": true, 00:10:42.438 "flush": true, 00:10:42.438 "reset": true, 00:10:42.438 "nvme_admin": false, 00:10:42.438 "nvme_io": false, 00:10:42.438 "nvme_io_md": false, 00:10:42.438 "write_zeroes": true, 00:10:42.438 "zcopy": true, 00:10:42.438 "get_zone_info": false, 00:10:42.438 "zone_management": false, 00:10:42.438 "zone_append": false, 00:10:42.438 "compare": false, 00:10:42.438 "compare_and_write": false, 00:10:42.438 "abort": true, 00:10:42.438 "seek_hole": false, 00:10:42.438 "seek_data": false, 00:10:42.438 "copy": true, 00:10:42.438 "nvme_iov_md": false 00:10:42.438 }, 00:10:42.438 "memory_domains": [ 00:10:42.438 { 00:10:42.438 "dma_device_id": "system", 00:10:42.438 "dma_device_type": 1 00:10:42.438 }, 00:10:42.438 { 00:10:42.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.438 "dma_device_type": 2 00:10:42.438 } 00:10:42.438 ], 00:10:42.438 "driver_specific": {} 00:10:42.438 } 00:10:42.438 ] 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.438 12:36:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.438 BaseBdev3 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.438 12:36:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.438 [ 00:10:42.438 { 
00:10:42.438 "name": "BaseBdev3", 00:10:42.438 "aliases": [ 00:10:42.438 "3234d6eb-058d-4f81-a992-d7e5d59d9f4d" 00:10:42.438 ], 00:10:42.438 "product_name": "Malloc disk", 00:10:42.438 "block_size": 512, 00:10:42.438 "num_blocks": 65536, 00:10:42.438 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:42.438 "assigned_rate_limits": { 00:10:42.438 "rw_ios_per_sec": 0, 00:10:42.438 "rw_mbytes_per_sec": 0, 00:10:42.438 "r_mbytes_per_sec": 0, 00:10:42.438 "w_mbytes_per_sec": 0 00:10:42.438 }, 00:10:42.438 "claimed": false, 00:10:42.438 "zoned": false, 00:10:42.438 "supported_io_types": { 00:10:42.438 "read": true, 00:10:42.438 "write": true, 00:10:42.438 "unmap": true, 00:10:42.438 "flush": true, 00:10:42.438 "reset": true, 00:10:42.438 "nvme_admin": false, 00:10:42.438 "nvme_io": false, 00:10:42.438 "nvme_io_md": false, 00:10:42.438 "write_zeroes": true, 00:10:42.438 "zcopy": true, 00:10:42.438 "get_zone_info": false, 00:10:42.438 "zone_management": false, 00:10:42.438 "zone_append": false, 00:10:42.438 "compare": false, 00:10:42.438 "compare_and_write": false, 00:10:42.438 "abort": true, 00:10:42.438 "seek_hole": false, 00:10:42.438 "seek_data": false, 00:10:42.438 "copy": true, 00:10:42.438 "nvme_iov_md": false 00:10:42.438 }, 00:10:42.438 "memory_domains": [ 00:10:42.438 { 00:10:42.438 "dma_device_id": "system", 00:10:42.438 "dma_device_type": 1 00:10:42.438 }, 00:10:42.438 { 00:10:42.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.438 "dma_device_type": 2 00:10:42.438 } 00:10:42.438 ], 00:10:42.438 "driver_specific": {} 00:10:42.438 } 00:10:42.438 ] 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.438 BaseBdev4 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.438 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:42.438 [ 00:10:42.438 { 00:10:42.438 "name": "BaseBdev4", 00:10:42.438 "aliases": [ 00:10:42.438 "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08" 00:10:42.438 ], 00:10:42.438 "product_name": "Malloc disk", 00:10:42.438 "block_size": 512, 00:10:42.438 "num_blocks": 65536, 00:10:42.438 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:42.438 "assigned_rate_limits": { 00:10:42.438 "rw_ios_per_sec": 0, 00:10:42.438 "rw_mbytes_per_sec": 0, 00:10:42.438 "r_mbytes_per_sec": 0, 00:10:42.438 "w_mbytes_per_sec": 0 00:10:42.438 }, 00:10:42.438 "claimed": false, 00:10:42.438 "zoned": false, 00:10:42.438 "supported_io_types": { 00:10:42.438 "read": true, 00:10:42.438 "write": true, 00:10:42.438 "unmap": true, 00:10:42.438 "flush": true, 00:10:42.438 "reset": true, 00:10:42.438 "nvme_admin": false, 00:10:42.438 "nvme_io": false, 00:10:42.438 "nvme_io_md": false, 00:10:42.438 "write_zeroes": true, 00:10:42.438 "zcopy": true, 00:10:42.438 "get_zone_info": false, 00:10:42.438 "zone_management": false, 00:10:42.438 "zone_append": false, 00:10:42.438 "compare": false, 00:10:42.438 "compare_and_write": false, 00:10:42.438 "abort": true, 00:10:42.438 "seek_hole": false, 00:10:42.438 "seek_data": false, 00:10:42.438 "copy": true, 00:10:42.438 "nvme_iov_md": false 00:10:42.438 }, 00:10:42.438 "memory_domains": [ 00:10:42.438 { 00:10:42.438 "dma_device_id": "system", 00:10:42.438 "dma_device_type": 1 00:10:42.439 }, 00:10:42.439 { 00:10:42.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.439 "dma_device_type": 2 00:10:42.439 } 00:10:42.439 ], 00:10:42.439 "driver_specific": {} 00:10:42.439 } 00:10:42.439 ] 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:42.439 12:36:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.439 [2024-12-14 12:36:42.113628] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:42.439 [2024-12-14 12:36:42.113706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:42.439 [2024-12-14 12:36:42.113748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.439 [2024-12-14 12:36:42.115582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:42.439 [2024-12-14 12:36:42.115698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.439 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.439 "name": "Existed_Raid", 00:10:42.439 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:42.439 "strip_size_kb": 64, 00:10:42.439 "state": "configuring", 00:10:42.439 "raid_level": "raid0", 00:10:42.439 "superblock": true, 00:10:42.439 "num_base_bdevs": 4, 00:10:42.439 "num_base_bdevs_discovered": 3, 00:10:42.439 "num_base_bdevs_operational": 4, 00:10:42.439 "base_bdevs_list": [ 00:10:42.439 { 00:10:42.439 "name": "BaseBdev1", 00:10:42.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.439 "is_configured": false, 00:10:42.439 "data_offset": 0, 00:10:42.439 "data_size": 0 00:10:42.439 }, 00:10:42.439 { 00:10:42.439 "name": "BaseBdev2", 00:10:42.439 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:42.439 "is_configured": true, 00:10:42.439 "data_offset": 2048, 00:10:42.439 "data_size": 63488 
00:10:42.439 }, 00:10:42.439 { 00:10:42.439 "name": "BaseBdev3", 00:10:42.439 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:42.439 "is_configured": true, 00:10:42.439 "data_offset": 2048, 00:10:42.439 "data_size": 63488 00:10:42.439 }, 00:10:42.439 { 00:10:42.439 "name": "BaseBdev4", 00:10:42.439 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:42.439 "is_configured": true, 00:10:42.439 "data_offset": 2048, 00:10:42.439 "data_size": 63488 00:10:42.439 } 00:10:42.439 ] 00:10:42.439 }' 00:10:42.699 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.699 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.958 [2024-12-14 12:36:42.552934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.958 "name": "Existed_Raid", 00:10:42.958 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:42.958 "strip_size_kb": 64, 00:10:42.958 "state": "configuring", 00:10:42.958 "raid_level": "raid0", 00:10:42.958 "superblock": true, 00:10:42.958 "num_base_bdevs": 4, 00:10:42.958 "num_base_bdevs_discovered": 2, 00:10:42.958 "num_base_bdevs_operational": 4, 00:10:42.958 "base_bdevs_list": [ 00:10:42.958 { 00:10:42.958 "name": "BaseBdev1", 00:10:42.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.958 "is_configured": false, 00:10:42.958 "data_offset": 0, 00:10:42.958 "data_size": 0 00:10:42.958 }, 00:10:42.958 { 00:10:42.958 "name": null, 00:10:42.958 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:42.958 "is_configured": false, 00:10:42.958 "data_offset": 0, 00:10:42.958 "data_size": 63488 
00:10:42.958 }, 00:10:42.958 { 00:10:42.958 "name": "BaseBdev3", 00:10:42.958 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:42.958 "is_configured": true, 00:10:42.958 "data_offset": 2048, 00:10:42.958 "data_size": 63488 00:10:42.958 }, 00:10:42.958 { 00:10:42.958 "name": "BaseBdev4", 00:10:42.958 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:42.958 "is_configured": true, 00:10:42.958 "data_offset": 2048, 00:10:42.958 "data_size": 63488 00:10:42.958 } 00:10:42.958 ] 00:10:42.958 }' 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.958 12:36:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.527 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.528 [2024-12-14 12:36:43.105107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:43.528 BaseBdev1 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.528 [ 00:10:43.528 { 00:10:43.528 "name": "BaseBdev1", 00:10:43.528 "aliases": [ 00:10:43.528 "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1" 00:10:43.528 ], 00:10:43.528 "product_name": "Malloc disk", 00:10:43.528 "block_size": 512, 00:10:43.528 "num_blocks": 65536, 00:10:43.528 "uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:43.528 "assigned_rate_limits": { 00:10:43.528 "rw_ios_per_sec": 0, 00:10:43.528 "rw_mbytes_per_sec": 0, 
00:10:43.528 "r_mbytes_per_sec": 0, 00:10:43.528 "w_mbytes_per_sec": 0 00:10:43.528 }, 00:10:43.528 "claimed": true, 00:10:43.528 "claim_type": "exclusive_write", 00:10:43.528 "zoned": false, 00:10:43.528 "supported_io_types": { 00:10:43.528 "read": true, 00:10:43.528 "write": true, 00:10:43.528 "unmap": true, 00:10:43.528 "flush": true, 00:10:43.528 "reset": true, 00:10:43.528 "nvme_admin": false, 00:10:43.528 "nvme_io": false, 00:10:43.528 "nvme_io_md": false, 00:10:43.528 "write_zeroes": true, 00:10:43.528 "zcopy": true, 00:10:43.528 "get_zone_info": false, 00:10:43.528 "zone_management": false, 00:10:43.528 "zone_append": false, 00:10:43.528 "compare": false, 00:10:43.528 "compare_and_write": false, 00:10:43.528 "abort": true, 00:10:43.528 "seek_hole": false, 00:10:43.528 "seek_data": false, 00:10:43.528 "copy": true, 00:10:43.528 "nvme_iov_md": false 00:10:43.528 }, 00:10:43.528 "memory_domains": [ 00:10:43.528 { 00:10:43.528 "dma_device_id": "system", 00:10:43.528 "dma_device_type": 1 00:10:43.528 }, 00:10:43.528 { 00:10:43.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.528 "dma_device_type": 2 00:10:43.528 } 00:10:43.528 ], 00:10:43.528 "driver_specific": {} 00:10:43.528 } 00:10:43.528 ] 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:43.528 12:36:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.528 "name": "Existed_Raid", 00:10:43.528 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:43.528 "strip_size_kb": 64, 00:10:43.528 "state": "configuring", 00:10:43.528 "raid_level": "raid0", 00:10:43.528 "superblock": true, 00:10:43.528 "num_base_bdevs": 4, 00:10:43.528 "num_base_bdevs_discovered": 3, 00:10:43.528 "num_base_bdevs_operational": 4, 00:10:43.528 "base_bdevs_list": [ 00:10:43.528 { 00:10:43.528 "name": "BaseBdev1", 00:10:43.528 "uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:43.528 "is_configured": true, 00:10:43.528 "data_offset": 2048, 00:10:43.528 "data_size": 63488 00:10:43.528 }, 00:10:43.528 { 
00:10:43.528 "name": null, 00:10:43.528 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:43.528 "is_configured": false, 00:10:43.528 "data_offset": 0, 00:10:43.528 "data_size": 63488 00:10:43.528 }, 00:10:43.528 { 00:10:43.528 "name": "BaseBdev3", 00:10:43.528 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:43.528 "is_configured": true, 00:10:43.528 "data_offset": 2048, 00:10:43.528 "data_size": 63488 00:10:43.528 }, 00:10:43.528 { 00:10:43.528 "name": "BaseBdev4", 00:10:43.528 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:43.528 "is_configured": true, 00:10:43.528 "data_offset": 2048, 00:10:43.528 "data_size": 63488 00:10:43.528 } 00:10:43.528 ] 00:10:43.528 }' 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.528 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.097 [2024-12-14 12:36:43.652216] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.097 12:36:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.097 "name": "Existed_Raid", 00:10:44.097 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:44.097 "strip_size_kb": 64, 00:10:44.097 "state": "configuring", 00:10:44.097 "raid_level": "raid0", 00:10:44.097 "superblock": true, 00:10:44.097 "num_base_bdevs": 4, 00:10:44.097 "num_base_bdevs_discovered": 2, 00:10:44.097 "num_base_bdevs_operational": 4, 00:10:44.097 "base_bdevs_list": [ 00:10:44.097 { 00:10:44.097 "name": "BaseBdev1", 00:10:44.097 "uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:44.097 "is_configured": true, 00:10:44.097 "data_offset": 2048, 00:10:44.097 "data_size": 63488 00:10:44.097 }, 00:10:44.097 { 00:10:44.097 "name": null, 00:10:44.097 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:44.097 "is_configured": false, 00:10:44.097 "data_offset": 0, 00:10:44.097 "data_size": 63488 00:10:44.097 }, 00:10:44.097 { 00:10:44.097 "name": null, 00:10:44.097 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:44.097 "is_configured": false, 00:10:44.097 "data_offset": 0, 00:10:44.097 "data_size": 63488 00:10:44.097 }, 00:10:44.097 { 00:10:44.097 "name": "BaseBdev4", 00:10:44.097 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:44.097 "is_configured": true, 00:10:44.097 "data_offset": 2048, 00:10:44.097 "data_size": 63488 00:10:44.097 } 00:10:44.097 ] 00:10:44.097 }' 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.097 12:36:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.357 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.357 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.357 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.357 
12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.616 [2024-12-14 12:36:44.135375] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.616 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.616 "name": "Existed_Raid", 00:10:44.616 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:44.616 "strip_size_kb": 64, 00:10:44.616 "state": "configuring", 00:10:44.616 "raid_level": "raid0", 00:10:44.616 "superblock": true, 00:10:44.616 "num_base_bdevs": 4, 00:10:44.616 "num_base_bdevs_discovered": 3, 00:10:44.616 "num_base_bdevs_operational": 4, 00:10:44.616 "base_bdevs_list": [ 00:10:44.616 { 00:10:44.616 "name": "BaseBdev1", 00:10:44.616 "uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:44.616 "is_configured": true, 00:10:44.616 "data_offset": 2048, 00:10:44.616 "data_size": 63488 00:10:44.616 }, 00:10:44.616 { 00:10:44.616 "name": null, 00:10:44.616 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:44.616 "is_configured": false, 00:10:44.616 "data_offset": 0, 00:10:44.616 "data_size": 63488 00:10:44.616 }, 00:10:44.616 { 00:10:44.617 "name": "BaseBdev3", 00:10:44.617 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:44.617 "is_configured": true, 00:10:44.617 "data_offset": 2048, 00:10:44.617 "data_size": 63488 00:10:44.617 }, 00:10:44.617 { 00:10:44.617 "name": "BaseBdev4", 00:10:44.617 "uuid": 
"bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:44.617 "is_configured": true, 00:10:44.617 "data_offset": 2048, 00:10:44.617 "data_size": 63488 00:10:44.617 } 00:10:44.617 ] 00:10:44.617 }' 00:10:44.617 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.617 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.876 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.137 [2024-12-14 12:36:44.614608] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.137 "name": "Existed_Raid", 00:10:45.137 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:45.137 "strip_size_kb": 64, 00:10:45.137 "state": "configuring", 00:10:45.137 "raid_level": "raid0", 00:10:45.137 "superblock": true, 00:10:45.137 "num_base_bdevs": 4, 00:10:45.137 "num_base_bdevs_discovered": 2, 00:10:45.137 "num_base_bdevs_operational": 4, 00:10:45.137 "base_bdevs_list": [ 00:10:45.137 { 00:10:45.137 "name": null, 00:10:45.137 
"uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:45.137 "is_configured": false, 00:10:45.137 "data_offset": 0, 00:10:45.137 "data_size": 63488 00:10:45.137 }, 00:10:45.137 { 00:10:45.137 "name": null, 00:10:45.137 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:45.137 "is_configured": false, 00:10:45.137 "data_offset": 0, 00:10:45.137 "data_size": 63488 00:10:45.137 }, 00:10:45.137 { 00:10:45.137 "name": "BaseBdev3", 00:10:45.137 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:45.137 "is_configured": true, 00:10:45.137 "data_offset": 2048, 00:10:45.137 "data_size": 63488 00:10:45.137 }, 00:10:45.137 { 00:10:45.137 "name": "BaseBdev4", 00:10:45.137 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:45.137 "is_configured": true, 00:10:45.137 "data_offset": 2048, 00:10:45.137 "data_size": 63488 00:10:45.137 } 00:10:45.137 ] 00:10:45.137 }' 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.137 12:36:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.704 [2024-12-14 12:36:45.219558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.704 12:36:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.704 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.704 "name": "Existed_Raid", 00:10:45.704 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:45.704 "strip_size_kb": 64, 00:10:45.704 "state": "configuring", 00:10:45.704 "raid_level": "raid0", 00:10:45.704 "superblock": true, 00:10:45.704 "num_base_bdevs": 4, 00:10:45.704 "num_base_bdevs_discovered": 3, 00:10:45.704 "num_base_bdevs_operational": 4, 00:10:45.704 "base_bdevs_list": [ 00:10:45.704 { 00:10:45.704 "name": null, 00:10:45.704 "uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:45.704 "is_configured": false, 00:10:45.704 "data_offset": 0, 00:10:45.704 "data_size": 63488 00:10:45.704 }, 00:10:45.705 { 00:10:45.705 "name": "BaseBdev2", 00:10:45.705 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:45.705 "is_configured": true, 00:10:45.705 "data_offset": 2048, 00:10:45.705 "data_size": 63488 00:10:45.705 }, 00:10:45.705 { 00:10:45.705 "name": "BaseBdev3", 00:10:45.705 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:45.705 "is_configured": true, 00:10:45.705 "data_offset": 2048, 00:10:45.705 "data_size": 63488 00:10:45.705 }, 00:10:45.705 { 00:10:45.705 "name": "BaseBdev4", 00:10:45.705 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:45.705 "is_configured": true, 00:10:45.705 "data_offset": 2048, 00:10:45.705 "data_size": 63488 00:10:45.705 } 00:10:45.705 ] 00:10:45.705 }' 00:10:45.705 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.705 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:45.963 12:36:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.963 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.223 [2024-12-14 12:36:45.708079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:46.223 [2024-12-14 12:36:45.708424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:46.223 [2024-12-14 12:36:45.708478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:46.223 [2024-12-14 12:36:45.708762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:46.223 NewBaseBdev 00:10:46.223 [2024-12-14 12:36:45.708941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:46.223 [2024-12-14 12:36:45.708964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:46.223 [2024-12-14 12:36:45.709128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.223 12:36:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.223 [ 00:10:46.223 { 00:10:46.223 "name": "NewBaseBdev", 00:10:46.223 "aliases": [ 00:10:46.223 "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1" 00:10:46.223 ], 00:10:46.223 "product_name": "Malloc disk", 00:10:46.223 "block_size": 512, 00:10:46.223 "num_blocks": 65536, 00:10:46.223 "uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:46.223 "assigned_rate_limits": { 00:10:46.223 "rw_ios_per_sec": 0, 00:10:46.223 "rw_mbytes_per_sec": 0, 00:10:46.223 "r_mbytes_per_sec": 0, 00:10:46.223 "w_mbytes_per_sec": 0 00:10:46.223 }, 00:10:46.223 "claimed": true, 00:10:46.223 "claim_type": "exclusive_write", 00:10:46.223 "zoned": false, 00:10:46.223 "supported_io_types": { 00:10:46.223 "read": true, 00:10:46.223 "write": true, 00:10:46.223 "unmap": true, 00:10:46.223 "flush": true, 00:10:46.223 "reset": true, 00:10:46.223 "nvme_admin": false, 00:10:46.223 "nvme_io": false, 00:10:46.223 "nvme_io_md": false, 00:10:46.223 "write_zeroes": true, 00:10:46.223 "zcopy": true, 00:10:46.223 "get_zone_info": false, 00:10:46.223 "zone_management": false, 00:10:46.223 "zone_append": false, 00:10:46.223 "compare": false, 00:10:46.223 "compare_and_write": false, 00:10:46.223 "abort": true, 00:10:46.223 "seek_hole": false, 00:10:46.223 "seek_data": false, 00:10:46.223 "copy": true, 00:10:46.223 "nvme_iov_md": false 00:10:46.223 }, 00:10:46.223 "memory_domains": [ 00:10:46.223 { 00:10:46.223 "dma_device_id": "system", 00:10:46.223 "dma_device_type": 1 00:10:46.223 }, 00:10:46.223 { 00:10:46.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.223 "dma_device_type": 2 00:10:46.223 } 00:10:46.223 ], 00:10:46.223 "driver_specific": {} 00:10:46.223 } 00:10:46.223 ] 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.223 12:36:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.223 "name": "Existed_Raid", 00:10:46.223 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:46.223 "strip_size_kb": 64, 00:10:46.223 
"state": "online", 00:10:46.223 "raid_level": "raid0", 00:10:46.223 "superblock": true, 00:10:46.223 "num_base_bdevs": 4, 00:10:46.223 "num_base_bdevs_discovered": 4, 00:10:46.223 "num_base_bdevs_operational": 4, 00:10:46.223 "base_bdevs_list": [ 00:10:46.223 { 00:10:46.223 "name": "NewBaseBdev", 00:10:46.223 "uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:46.223 "is_configured": true, 00:10:46.223 "data_offset": 2048, 00:10:46.223 "data_size": 63488 00:10:46.223 }, 00:10:46.223 { 00:10:46.223 "name": "BaseBdev2", 00:10:46.223 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:46.223 "is_configured": true, 00:10:46.223 "data_offset": 2048, 00:10:46.223 "data_size": 63488 00:10:46.223 }, 00:10:46.223 { 00:10:46.223 "name": "BaseBdev3", 00:10:46.223 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:46.223 "is_configured": true, 00:10:46.223 "data_offset": 2048, 00:10:46.223 "data_size": 63488 00:10:46.223 }, 00:10:46.223 { 00:10:46.223 "name": "BaseBdev4", 00:10:46.223 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:46.223 "is_configured": true, 00:10:46.223 "data_offset": 2048, 00:10:46.223 "data_size": 63488 00:10:46.223 } 00:10:46.223 ] 00:10:46.223 }' 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.223 12:36:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:46.482 
12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.482 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.742 [2024-12-14 12:36:46.219634] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.742 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.742 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:46.742 "name": "Existed_Raid", 00:10:46.742 "aliases": [ 00:10:46.742 "f9f8d2ed-ae6a-446f-831b-53622b5d68a3" 00:10:46.742 ], 00:10:46.742 "product_name": "Raid Volume", 00:10:46.742 "block_size": 512, 00:10:46.742 "num_blocks": 253952, 00:10:46.742 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:46.742 "assigned_rate_limits": { 00:10:46.742 "rw_ios_per_sec": 0, 00:10:46.742 "rw_mbytes_per_sec": 0, 00:10:46.742 "r_mbytes_per_sec": 0, 00:10:46.742 "w_mbytes_per_sec": 0 00:10:46.742 }, 00:10:46.742 "claimed": false, 00:10:46.742 "zoned": false, 00:10:46.742 "supported_io_types": { 00:10:46.742 "read": true, 00:10:46.742 "write": true, 00:10:46.742 "unmap": true, 00:10:46.742 "flush": true, 00:10:46.742 "reset": true, 00:10:46.742 "nvme_admin": false, 00:10:46.742 "nvme_io": false, 00:10:46.742 "nvme_io_md": false, 00:10:46.742 "write_zeroes": true, 00:10:46.742 "zcopy": false, 00:10:46.742 "get_zone_info": false, 00:10:46.742 "zone_management": false, 00:10:46.742 "zone_append": false, 00:10:46.742 "compare": false, 00:10:46.742 "compare_and_write": false, 00:10:46.742 "abort": 
false, 00:10:46.742 "seek_hole": false, 00:10:46.742 "seek_data": false, 00:10:46.742 "copy": false, 00:10:46.742 "nvme_iov_md": false 00:10:46.742 }, 00:10:46.742 "memory_domains": [ 00:10:46.742 { 00:10:46.742 "dma_device_id": "system", 00:10:46.742 "dma_device_type": 1 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.742 "dma_device_type": 2 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "dma_device_id": "system", 00:10:46.742 "dma_device_type": 1 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.742 "dma_device_type": 2 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "dma_device_id": "system", 00:10:46.742 "dma_device_type": 1 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.742 "dma_device_type": 2 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "dma_device_id": "system", 00:10:46.742 "dma_device_type": 1 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.742 "dma_device_type": 2 00:10:46.742 } 00:10:46.742 ], 00:10:46.742 "driver_specific": { 00:10:46.742 "raid": { 00:10:46.742 "uuid": "f9f8d2ed-ae6a-446f-831b-53622b5d68a3", 00:10:46.742 "strip_size_kb": 64, 00:10:46.742 "state": "online", 00:10:46.742 "raid_level": "raid0", 00:10:46.742 "superblock": true, 00:10:46.742 "num_base_bdevs": 4, 00:10:46.742 "num_base_bdevs_discovered": 4, 00:10:46.742 "num_base_bdevs_operational": 4, 00:10:46.742 "base_bdevs_list": [ 00:10:46.742 { 00:10:46.742 "name": "NewBaseBdev", 00:10:46.742 "uuid": "fb057fd9-bf50-4b6d-9f6e-ac1be4c47dc1", 00:10:46.742 "is_configured": true, 00:10:46.742 "data_offset": 2048, 00:10:46.742 "data_size": 63488 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "name": "BaseBdev2", 00:10:46.742 "uuid": "b5fbb623-a716-4d6b-b76a-551e12b31606", 00:10:46.742 "is_configured": true, 00:10:46.742 "data_offset": 2048, 00:10:46.742 "data_size": 63488 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 
"name": "BaseBdev3", 00:10:46.742 "uuid": "3234d6eb-058d-4f81-a992-d7e5d59d9f4d", 00:10:46.742 "is_configured": true, 00:10:46.742 "data_offset": 2048, 00:10:46.742 "data_size": 63488 00:10:46.742 }, 00:10:46.742 { 00:10:46.742 "name": "BaseBdev4", 00:10:46.742 "uuid": "bdd4d1de-f4e4-4ab2-b443-ac3f255dbf08", 00:10:46.742 "is_configured": true, 00:10:46.742 "data_offset": 2048, 00:10:46.742 "data_size": 63488 00:10:46.742 } 00:10:46.742 ] 00:10:46.742 } 00:10:46.742 } 00:10:46.742 }' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:46.743 BaseBdev2 00:10:46.743 BaseBdev3 00:10:46.743 BaseBdev4' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.743 12:36:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.743 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.003 [2024-12-14 12:36:46.534694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.003 [2024-12-14 12:36:46.534726] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.003 [2024-12-14 12:36:46.534801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.003 [2024-12-14 12:36:46.534871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.003 [2024-12-14 12:36:46.534881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71841 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71841 ']' 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71841 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71841 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71841' 00:10:47.003 killing process with pid 71841 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71841 00:10:47.003 [2024-12-14 12:36:46.575684] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:47.003 12:36:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71841 00:10:47.263 [2024-12-14 12:36:46.971283] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:48.644 12:36:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:48.644 00:10:48.644 real 0m11.360s 00:10:48.644 user 0m18.142s 00:10:48.644 sys 0m1.951s 00:10:48.645 12:36:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.645 12:36:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.645 ************************************ 00:10:48.645 END TEST raid_state_function_test_sb 00:10:48.645 ************************************ 00:10:48.645 12:36:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:48.645 12:36:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:48.645 12:36:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.645 12:36:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:48.645 ************************************ 00:10:48.645 START TEST raid_superblock_test 00:10:48.645 ************************************ 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72516 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72516 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72516 ']' 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:48.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:48.645 12:36:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.645 [2024-12-14 12:36:48.264672] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:48.645 [2024-12-14 12:36:48.264790] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72516 ] 00:10:48.905 [2024-12-14 12:36:48.439300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:48.905 [2024-12-14 12:36:48.549091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.164 [2024-12-14 12:36:48.747990] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.165 [2024-12-14 12:36:48.748026] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:49.425 
12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.425 malloc1 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.425 [2024-12-14 12:36:49.143766] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:49.425 [2024-12-14 12:36:49.143882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.425 [2024-12-14 12:36:49.143924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:49.425 [2024-12-14 12:36:49.143953] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.425 [2024-12-14 12:36:49.146144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.425 [2024-12-14 12:36:49.146237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:49.425 pt1 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.425 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.724 malloc2 00:10:49.724 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.724 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:49.724 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.724 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.724 [2024-12-14 12:36:49.200207] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:49.724 [2024-12-14 12:36:49.200318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.724 [2024-12-14 12:36:49.200346] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:49.724 [2024-12-14 12:36:49.200355] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.724 [2024-12-14 12:36:49.202482] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.725 [2024-12-14 12:36:49.202520] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:49.725 
pt2 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.725 malloc3 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.725 [2024-12-14 12:36:49.272003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:49.725 [2024-12-14 12:36:49.272108] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.725 [2024-12-14 12:36:49.272134] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:49.725 [2024-12-14 12:36:49.272143] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.725 [2024-12-14 12:36:49.274208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.725 [2024-12-14 12:36:49.274231] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:49.725 pt3 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.725 malloc4 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.725 [2024-12-14 12:36:49.326020] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:49.725 [2024-12-14 12:36:49.326135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:49.725 [2024-12-14 12:36:49.326204] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:49.725 [2024-12-14 12:36:49.326243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:49.725 [2024-12-14 12:36:49.328631] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:49.725 [2024-12-14 12:36:49.328696] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:49.725 pt4 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.725 [2024-12-14 12:36:49.338032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:49.725 [2024-12-14 
12:36:49.339847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:49.725 [2024-12-14 12:36:49.339972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:49.725 [2024-12-14 12:36:49.340071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:49.725 [2024-12-14 12:36:49.340271] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:49.725 [2024-12-14 12:36:49.340317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:49.725 [2024-12-14 12:36:49.340590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:49.725 [2024-12-14 12:36:49.340780] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:49.725 [2024-12-14 12:36:49.340824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:49.725 [2024-12-14 12:36:49.341015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.725 "name": "raid_bdev1", 00:10:49.725 "uuid": "f38aed05-a655-4105-aea4-3c07b6ba00cc", 00:10:49.725 "strip_size_kb": 64, 00:10:49.725 "state": "online", 00:10:49.725 "raid_level": "raid0", 00:10:49.725 "superblock": true, 00:10:49.725 "num_base_bdevs": 4, 00:10:49.725 "num_base_bdevs_discovered": 4, 00:10:49.725 "num_base_bdevs_operational": 4, 00:10:49.725 "base_bdevs_list": [ 00:10:49.725 { 00:10:49.725 "name": "pt1", 00:10:49.725 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:49.725 "is_configured": true, 00:10:49.725 "data_offset": 2048, 00:10:49.725 "data_size": 63488 00:10:49.725 }, 00:10:49.725 { 00:10:49.725 "name": "pt2", 00:10:49.725 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:49.725 "is_configured": true, 00:10:49.725 "data_offset": 2048, 00:10:49.725 "data_size": 63488 00:10:49.725 }, 00:10:49.725 { 00:10:49.725 "name": "pt3", 00:10:49.725 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:49.725 "is_configured": true, 00:10:49.725 "data_offset": 2048, 00:10:49.725 
"data_size": 63488 00:10:49.725 }, 00:10:49.725 { 00:10:49.725 "name": "pt4", 00:10:49.725 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:49.725 "is_configured": true, 00:10:49.725 "data_offset": 2048, 00:10:49.725 "data_size": 63488 00:10:49.725 } 00:10:49.725 ] 00:10:49.725 }' 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.725 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:50.295 [2024-12-14 12:36:49.757627] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:50.295 "name": "raid_bdev1", 00:10:50.295 "aliases": [ 00:10:50.295 "f38aed05-a655-4105-aea4-3c07b6ba00cc" 
00:10:50.295 ], 00:10:50.295 "product_name": "Raid Volume", 00:10:50.295 "block_size": 512, 00:10:50.295 "num_blocks": 253952, 00:10:50.295 "uuid": "f38aed05-a655-4105-aea4-3c07b6ba00cc", 00:10:50.295 "assigned_rate_limits": { 00:10:50.295 "rw_ios_per_sec": 0, 00:10:50.295 "rw_mbytes_per_sec": 0, 00:10:50.295 "r_mbytes_per_sec": 0, 00:10:50.295 "w_mbytes_per_sec": 0 00:10:50.295 }, 00:10:50.295 "claimed": false, 00:10:50.295 "zoned": false, 00:10:50.295 "supported_io_types": { 00:10:50.295 "read": true, 00:10:50.295 "write": true, 00:10:50.295 "unmap": true, 00:10:50.295 "flush": true, 00:10:50.295 "reset": true, 00:10:50.295 "nvme_admin": false, 00:10:50.295 "nvme_io": false, 00:10:50.295 "nvme_io_md": false, 00:10:50.295 "write_zeroes": true, 00:10:50.295 "zcopy": false, 00:10:50.295 "get_zone_info": false, 00:10:50.295 "zone_management": false, 00:10:50.295 "zone_append": false, 00:10:50.295 "compare": false, 00:10:50.295 "compare_and_write": false, 00:10:50.295 "abort": false, 00:10:50.295 "seek_hole": false, 00:10:50.295 "seek_data": false, 00:10:50.295 "copy": false, 00:10:50.295 "nvme_iov_md": false 00:10:50.295 }, 00:10:50.295 "memory_domains": [ 00:10:50.295 { 00:10:50.295 "dma_device_id": "system", 00:10:50.295 "dma_device_type": 1 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.295 "dma_device_type": 2 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "dma_device_id": "system", 00:10:50.295 "dma_device_type": 1 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.295 "dma_device_type": 2 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "dma_device_id": "system", 00:10:50.295 "dma_device_type": 1 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.295 "dma_device_type": 2 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "dma_device_id": "system", 00:10:50.295 "dma_device_type": 1 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:50.295 "dma_device_type": 2 00:10:50.295 } 00:10:50.295 ], 00:10:50.295 "driver_specific": { 00:10:50.295 "raid": { 00:10:50.295 "uuid": "f38aed05-a655-4105-aea4-3c07b6ba00cc", 00:10:50.295 "strip_size_kb": 64, 00:10:50.295 "state": "online", 00:10:50.295 "raid_level": "raid0", 00:10:50.295 "superblock": true, 00:10:50.295 "num_base_bdevs": 4, 00:10:50.295 "num_base_bdevs_discovered": 4, 00:10:50.295 "num_base_bdevs_operational": 4, 00:10:50.295 "base_bdevs_list": [ 00:10:50.295 { 00:10:50.295 "name": "pt1", 00:10:50.295 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.295 "is_configured": true, 00:10:50.295 "data_offset": 2048, 00:10:50.295 "data_size": 63488 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "name": "pt2", 00:10:50.295 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.295 "is_configured": true, 00:10:50.295 "data_offset": 2048, 00:10:50.295 "data_size": 63488 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "name": "pt3", 00:10:50.295 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.295 "is_configured": true, 00:10:50.295 "data_offset": 2048, 00:10:50.295 "data_size": 63488 00:10:50.295 }, 00:10:50.295 { 00:10:50.295 "name": "pt4", 00:10:50.295 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.295 "is_configured": true, 00:10:50.295 "data_offset": 2048, 00:10:50.295 "data_size": 63488 00:10:50.295 } 00:10:50.295 ] 00:10:50.295 } 00:10:50.295 } 00:10:50.295 }' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:50.295 pt2 00:10:50.295 pt3 00:10:50.295 pt4' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.295 12:36:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.295 12:36:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.295 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.295 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.295 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:50.295 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:50.295 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:50.295 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.295 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 [2024-12-14 12:36:50.081009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f38aed05-a655-4105-aea4-3c07b6ba00cc 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f38aed05-a655-4105-aea4-3c07b6ba00cc ']' 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 [2024-12-14 12:36:50.128636] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.556 [2024-12-14 12:36:50.128699] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:50.556 [2024-12-14 12:36:50.128781] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:50.556 [2024-12-14 12:36:50.128851] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:50.556 [2024-12-14 12:36:50.128865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.556 12:36:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.556 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.556 [2024-12-14 12:36:50.276413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:50.556 [2024-12-14 12:36:50.278319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:50.556 [2024-12-14 12:36:50.278407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:50.556 [2024-12-14 12:36:50.278468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:50.556 [2024-12-14 12:36:50.278544] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:50.556 [2024-12-14 12:36:50.278631] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:50.556 [2024-12-14 12:36:50.278686] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:50.556 [2024-12-14 12:36:50.278740] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:50.556 [2024-12-14 12:36:50.278793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:50.556 [2024-12-14 12:36:50.278832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:50.556 request: 00:10:50.556 { 00:10:50.556 "name": "raid_bdev1", 00:10:50.556 "raid_level": "raid0", 00:10:50.556 "base_bdevs": [ 00:10:50.556 "malloc1", 00:10:50.556 "malloc2", 00:10:50.556 "malloc3", 00:10:50.556 "malloc4" 00:10:50.556 ], 00:10:50.556 "strip_size_kb": 64, 00:10:50.556 "superblock": false, 00:10:50.556 "method": "bdev_raid_create", 00:10:50.556 "req_id": 1 00:10:50.557 } 00:10:50.557 Got JSON-RPC error response 00:10:50.557 response: 00:10:50.557 { 00:10:50.557 "code": -17, 00:10:50.557 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:50.557 } 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.557 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.817 [2024-12-14 12:36:50.344285] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:50.817 [2024-12-14 12:36:50.344388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:50.817 [2024-12-14 12:36:50.344420] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:50.817 [2024-12-14 12:36:50.344449] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:50.817 [2024-12-14 12:36:50.346581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:50.817 [2024-12-14 12:36:50.346651] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:50.817 [2024-12-14 12:36:50.346755] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:50.817 [2024-12-14 12:36:50.346847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:50.817 pt1 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.817 "name": "raid_bdev1", 00:10:50.817 "uuid": "f38aed05-a655-4105-aea4-3c07b6ba00cc", 00:10:50.817 "strip_size_kb": 64, 00:10:50.817 "state": "configuring", 00:10:50.817 "raid_level": "raid0", 00:10:50.817 "superblock": true, 00:10:50.817 "num_base_bdevs": 4, 00:10:50.817 "num_base_bdevs_discovered": 1, 00:10:50.817 "num_base_bdevs_operational": 4, 00:10:50.817 "base_bdevs_list": [ 00:10:50.817 { 00:10:50.817 "name": "pt1", 00:10:50.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:50.817 "is_configured": true, 00:10:50.817 "data_offset": 2048, 00:10:50.817 "data_size": 63488 00:10:50.817 }, 00:10:50.817 { 00:10:50.817 "name": null, 00:10:50.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:50.817 "is_configured": false, 00:10:50.817 "data_offset": 2048, 00:10:50.817 "data_size": 63488 00:10:50.817 }, 00:10:50.817 { 00:10:50.817 "name": null, 00:10:50.817 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:50.817 "is_configured": false, 00:10:50.817 "data_offset": 2048, 00:10:50.817 "data_size": 63488 00:10:50.817 }, 00:10:50.817 { 00:10:50.817 "name": null, 00:10:50.817 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:50.817 "is_configured": false, 00:10:50.817 "data_offset": 2048, 00:10:50.817 "data_size": 63488 00:10:50.817 } 00:10:50.817 ] 00:10:50.817 }' 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.817 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.077 [2024-12-14 12:36:50.755616] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.077 [2024-12-14 12:36:50.755699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.077 [2024-12-14 12:36:50.755720] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:51.077 [2024-12-14 12:36:50.755730] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.077 [2024-12-14 12:36:50.756194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.077 [2024-12-14 12:36:50.756228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.077 [2024-12-14 12:36:50.756313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:51.077 [2024-12-14 12:36:50.756337] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.077 pt2 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.077 [2024-12-14 12:36:50.767628] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.077 12:36:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.077 12:36:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.337 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.337 "name": "raid_bdev1", 00:10:51.337 "uuid": "f38aed05-a655-4105-aea4-3c07b6ba00cc", 00:10:51.337 "strip_size_kb": 64, 00:10:51.337 "state": "configuring", 00:10:51.337 "raid_level": "raid0", 00:10:51.337 "superblock": true, 00:10:51.337 "num_base_bdevs": 4, 00:10:51.337 "num_base_bdevs_discovered": 1, 00:10:51.337 "num_base_bdevs_operational": 4, 00:10:51.337 "base_bdevs_list": [ 00:10:51.337 { 00:10:51.337 "name": "pt1", 00:10:51.337 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.337 "is_configured": true, 00:10:51.337 "data_offset": 2048, 00:10:51.337 "data_size": 63488 00:10:51.337 }, 00:10:51.337 { 00:10:51.337 "name": null, 00:10:51.337 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.337 "is_configured": false, 00:10:51.337 "data_offset": 0, 00:10:51.337 "data_size": 63488 00:10:51.337 }, 00:10:51.337 { 00:10:51.337 "name": null, 00:10:51.337 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.337 "is_configured": false, 00:10:51.337 "data_offset": 2048, 00:10:51.337 "data_size": 63488 00:10:51.337 }, 00:10:51.337 { 00:10:51.337 "name": null, 00:10:51.337 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.337 "is_configured": false, 00:10:51.337 "data_offset": 2048, 00:10:51.337 "data_size": 63488 00:10:51.337 } 00:10:51.337 ] 00:10:51.337 }' 00:10:51.337 12:36:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.337 12:36:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.597 [2024-12-14 12:36:51.210828] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:51.597 [2024-12-14 12:36:51.210938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.597 [2024-12-14 12:36:51.210986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:51.597 [2024-12-14 12:36:51.211022] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.597 [2024-12-14 12:36:51.211512] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.597 [2024-12-14 12:36:51.211576] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:51.597 [2024-12-14 12:36:51.211696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:51.597 [2024-12-14 12:36:51.211743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:51.597 pt2 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.597 [2024-12-14 12:36:51.222769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:51.597 [2024-12-14 12:36:51.222815] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.597 [2024-12-14 12:36:51.222833] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:51.597 [2024-12-14 12:36:51.222841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.597 [2024-12-14 12:36:51.223225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.597 [2024-12-14 12:36:51.223279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:51.597 [2024-12-14 12:36:51.223345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:51.597 [2024-12-14 12:36:51.223371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:51.597 pt3 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.597 [2024-12-14 12:36:51.234729] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:51.597 [2024-12-14 12:36:51.234819] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.597 [2024-12-14 12:36:51.234838] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:51.597 [2024-12-14 12:36:51.234846] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.597 [2024-12-14 12:36:51.235203] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.597 [2024-12-14 12:36:51.235220] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:51.597 [2024-12-14 12:36:51.235279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:51.597 [2024-12-14 12:36:51.235300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:51.597 [2024-12-14 12:36:51.235426] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:51.597 [2024-12-14 12:36:51.235434] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:51.597 [2024-12-14 12:36:51.235667] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:51.597 [2024-12-14 12:36:51.235811] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:51.597 [2024-12-14 12:36:51.235830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:51.597 [2024-12-14 12:36:51.235964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.597 pt4 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.597 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.597 "name": "raid_bdev1", 00:10:51.597 "uuid": "f38aed05-a655-4105-aea4-3c07b6ba00cc", 00:10:51.597 "strip_size_kb": 64, 00:10:51.597 "state": "online", 00:10:51.597 "raid_level": "raid0", 00:10:51.597 
"superblock": true, 00:10:51.597 "num_base_bdevs": 4, 00:10:51.597 "num_base_bdevs_discovered": 4, 00:10:51.597 "num_base_bdevs_operational": 4, 00:10:51.597 "base_bdevs_list": [ 00:10:51.597 { 00:10:51.597 "name": "pt1", 00:10:51.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:51.597 "is_configured": true, 00:10:51.597 "data_offset": 2048, 00:10:51.597 "data_size": 63488 00:10:51.597 }, 00:10:51.597 { 00:10:51.597 "name": "pt2", 00:10:51.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:51.597 "is_configured": true, 00:10:51.597 "data_offset": 2048, 00:10:51.597 "data_size": 63488 00:10:51.597 }, 00:10:51.597 { 00:10:51.597 "name": "pt3", 00:10:51.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:51.597 "is_configured": true, 00:10:51.597 "data_offset": 2048, 00:10:51.598 "data_size": 63488 00:10:51.598 }, 00:10:51.598 { 00:10:51.598 "name": "pt4", 00:10:51.598 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:51.598 "is_configured": true, 00:10:51.598 "data_offset": 2048, 00:10:51.598 "data_size": 63488 00:10:51.598 } 00:10:51.598 ] 00:10:51.598 }' 00:10:51.598 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.598 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.166 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:52.166 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:52.166 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:52.166 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:52.166 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:52.167 12:36:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.167 [2024-12-14 12:36:51.646480] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:52.167 "name": "raid_bdev1", 00:10:52.167 "aliases": [ 00:10:52.167 "f38aed05-a655-4105-aea4-3c07b6ba00cc" 00:10:52.167 ], 00:10:52.167 "product_name": "Raid Volume", 00:10:52.167 "block_size": 512, 00:10:52.167 "num_blocks": 253952, 00:10:52.167 "uuid": "f38aed05-a655-4105-aea4-3c07b6ba00cc", 00:10:52.167 "assigned_rate_limits": { 00:10:52.167 "rw_ios_per_sec": 0, 00:10:52.167 "rw_mbytes_per_sec": 0, 00:10:52.167 "r_mbytes_per_sec": 0, 00:10:52.167 "w_mbytes_per_sec": 0 00:10:52.167 }, 00:10:52.167 "claimed": false, 00:10:52.167 "zoned": false, 00:10:52.167 "supported_io_types": { 00:10:52.167 "read": true, 00:10:52.167 "write": true, 00:10:52.167 "unmap": true, 00:10:52.167 "flush": true, 00:10:52.167 "reset": true, 00:10:52.167 "nvme_admin": false, 00:10:52.167 "nvme_io": false, 00:10:52.167 "nvme_io_md": false, 00:10:52.167 "write_zeroes": true, 00:10:52.167 "zcopy": false, 00:10:52.167 "get_zone_info": false, 00:10:52.167 "zone_management": false, 00:10:52.167 "zone_append": false, 00:10:52.167 "compare": false, 00:10:52.167 "compare_and_write": false, 00:10:52.167 "abort": false, 00:10:52.167 "seek_hole": false, 00:10:52.167 "seek_data": false, 00:10:52.167 "copy": false, 00:10:52.167 "nvme_iov_md": false 00:10:52.167 }, 00:10:52.167 
"memory_domains": [ 00:10:52.167 { 00:10:52.167 "dma_device_id": "system", 00:10:52.167 "dma_device_type": 1 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.167 "dma_device_type": 2 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "dma_device_id": "system", 00:10:52.167 "dma_device_type": 1 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.167 "dma_device_type": 2 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "dma_device_id": "system", 00:10:52.167 "dma_device_type": 1 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.167 "dma_device_type": 2 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "dma_device_id": "system", 00:10:52.167 "dma_device_type": 1 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.167 "dma_device_type": 2 00:10:52.167 } 00:10:52.167 ], 00:10:52.167 "driver_specific": { 00:10:52.167 "raid": { 00:10:52.167 "uuid": "f38aed05-a655-4105-aea4-3c07b6ba00cc", 00:10:52.167 "strip_size_kb": 64, 00:10:52.167 "state": "online", 00:10:52.167 "raid_level": "raid0", 00:10:52.167 "superblock": true, 00:10:52.167 "num_base_bdevs": 4, 00:10:52.167 "num_base_bdevs_discovered": 4, 00:10:52.167 "num_base_bdevs_operational": 4, 00:10:52.167 "base_bdevs_list": [ 00:10:52.167 { 00:10:52.167 "name": "pt1", 00:10:52.167 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:52.167 "is_configured": true, 00:10:52.167 "data_offset": 2048, 00:10:52.167 "data_size": 63488 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "name": "pt2", 00:10:52.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:52.167 "is_configured": true, 00:10:52.167 "data_offset": 2048, 00:10:52.167 "data_size": 63488 00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "name": "pt3", 00:10:52.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:52.167 "is_configured": true, 00:10:52.167 "data_offset": 2048, 00:10:52.167 "data_size": 63488 
00:10:52.167 }, 00:10:52.167 { 00:10:52.167 "name": "pt4", 00:10:52.167 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:52.167 "is_configured": true, 00:10:52.167 "data_offset": 2048, 00:10:52.167 "data_size": 63488 00:10:52.167 } 00:10:52.167 ] 00:10:52.167 } 00:10:52.167 } 00:10:52.167 }' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:52.167 pt2 00:10:52.167 pt3 00:10:52.167 pt4' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.167 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:52.428 
12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.428 [2024-12-14 12:36:51.965850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f38aed05-a655-4105-aea4-3c07b6ba00cc '!=' f38aed05-a655-4105-aea4-3c07b6ba00cc ']' 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72516 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72516 ']' 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72516 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:52.428 12:36:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:52.428 12:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72516 00:10:52.428 killing process with pid 72516 00:10:52.428 12:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:52.428 12:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:52.428 12:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72516' 00:10:52.428 12:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72516 00:10:52.428 [2024-12-14 12:36:52.031749] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:52.428 [2024-12-14 12:36:52.031843] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:52.428 12:36:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72516 00:10:52.428 [2024-12-14 12:36:52.031916] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:52.428 [2024-12-14 12:36:52.031926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:52.997 [2024-12-14 12:36:52.428258] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.936 12:36:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:53.936 00:10:53.936 real 0m5.358s 00:10:53.936 user 0m7.642s 00:10:53.936 sys 0m0.906s 00:10:53.937 12:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.937 12:36:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.937 ************************************ 00:10:53.937 END TEST raid_superblock_test 
00:10:53.937 ************************************ 00:10:53.937 12:36:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:53.937 12:36:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:53.937 12:36:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.937 12:36:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:53.937 ************************************ 00:10:53.937 START TEST raid_read_error_test 00:10:53.937 ************************************ 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.w04zBGyqP7 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72775 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72775 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72775 ']' 00:10:53.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.937 12:36:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.196 [2024-12-14 12:36:53.704739] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:54.196 [2024-12-14 12:36:53.704853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72775 ] 00:10:54.196 [2024-12-14 12:36:53.881917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:54.456 [2024-12-14 12:36:53.995995] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.715 [2024-12-14 12:36:54.196371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.715 [2024-12-14 12:36:54.196432] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.975 BaseBdev1_malloc 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.975 true 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.975 [2024-12-14 12:36:54.598623] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:54.975 [2024-12-14 12:36:54.598723] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.975 [2024-12-14 12:36:54.598747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:54.975 [2024-12-14 12:36:54.598758] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.975 [2024-12-14 12:36:54.600989] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.975 [2024-12-14 12:36:54.601032] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:54.975 BaseBdev1 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.975 BaseBdev2_malloc 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.975 true 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.975 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.976 [2024-12-14 12:36:54.663920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:54.976 [2024-12-14 12:36:54.663974] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.976 [2024-12-14 12:36:54.663989] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:54.976 [2024-12-14 12:36:54.663999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.976 [2024-12-14 12:36:54.666046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.976 [2024-12-14 12:36:54.666143] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:54.976 BaseBdev2 00:10:54.976 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.976 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:54.976 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:54.976 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.976 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.236 BaseBdev3_malloc 00:10:55.236 12:36:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.236 true 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.236 [2024-12-14 12:36:54.751725] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:55.236 [2024-12-14 12:36:54.751835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.236 [2024-12-14 12:36:54.751856] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:55.236 [2024-12-14 12:36:54.751867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.236 [2024-12-14 12:36:54.753885] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.236 [2024-12-14 12:36:54.753925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:55.236 BaseBdev3 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.236 BaseBdev4_malloc 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.236 true 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.236 [2024-12-14 12:36:54.820071] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:55.236 [2024-12-14 12:36:54.820122] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.236 [2024-12-14 12:36:54.820139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:55.236 [2024-12-14 12:36:54.820168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.236 [2024-12-14 12:36:54.822244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.236 [2024-12-14 12:36:54.822285] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:55.236 BaseBdev4 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.236 [2024-12-14 12:36:54.832108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:55.236 [2024-12-14 12:36:54.833824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:55.236 [2024-12-14 12:36:54.833901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:55.236 [2024-12-14 12:36:54.833963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:55.236 [2024-12-14 12:36:54.834221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:55.236 [2024-12-14 12:36:54.834241] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:55.236 [2024-12-14 12:36:54.834497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:55.236 [2024-12-14 12:36:54.834679] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:55.236 [2024-12-14 12:36:54.834691] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:55.236 [2024-12-14 12:36:54.834875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:55.236 12:36:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.236 "name": "raid_bdev1", 00:10:55.236 "uuid": "c058af8e-4ed9-41d8-b5dd-21ad1ca9db4e", 00:10:55.236 "strip_size_kb": 64, 00:10:55.236 "state": "online", 00:10:55.236 "raid_level": "raid0", 00:10:55.236 "superblock": true, 00:10:55.236 "num_base_bdevs": 4, 00:10:55.236 "num_base_bdevs_discovered": 4, 00:10:55.236 "num_base_bdevs_operational": 4, 00:10:55.236 "base_bdevs_list": [ 00:10:55.236 
{ 00:10:55.236 "name": "BaseBdev1", 00:10:55.236 "uuid": "7e43bf70-fc93-55e6-bd29-1e5f922d36d7", 00:10:55.236 "is_configured": true, 00:10:55.236 "data_offset": 2048, 00:10:55.236 "data_size": 63488 00:10:55.236 }, 00:10:55.236 { 00:10:55.236 "name": "BaseBdev2", 00:10:55.236 "uuid": "9f1c21d3-ef32-5195-b278-8d1cf8a08fcf", 00:10:55.236 "is_configured": true, 00:10:55.236 "data_offset": 2048, 00:10:55.236 "data_size": 63488 00:10:55.236 }, 00:10:55.236 { 00:10:55.236 "name": "BaseBdev3", 00:10:55.236 "uuid": "f59b795b-405f-59a3-acbb-f6b72f0f0f6e", 00:10:55.236 "is_configured": true, 00:10:55.236 "data_offset": 2048, 00:10:55.236 "data_size": 63488 00:10:55.236 }, 00:10:55.236 { 00:10:55.236 "name": "BaseBdev4", 00:10:55.236 "uuid": "5057fe2c-f03f-5932-8f34-660c209df449", 00:10:55.236 "is_configured": true, 00:10:55.236 "data_offset": 2048, 00:10:55.236 "data_size": 63488 00:10:55.236 } 00:10:55.236 ] 00:10:55.236 }' 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.236 12:36:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.805 12:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:55.805 12:36:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:55.805 [2024-12-14 12:36:55.400390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.744 12:36:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.744 12:36:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.744 "name": "raid_bdev1", 00:10:56.744 "uuid": "c058af8e-4ed9-41d8-b5dd-21ad1ca9db4e", 00:10:56.744 "strip_size_kb": 64, 00:10:56.744 "state": "online", 00:10:56.744 "raid_level": "raid0", 00:10:56.744 "superblock": true, 00:10:56.744 "num_base_bdevs": 4, 00:10:56.744 "num_base_bdevs_discovered": 4, 00:10:56.744 "num_base_bdevs_operational": 4, 00:10:56.744 "base_bdevs_list": [ 00:10:56.744 { 00:10:56.744 "name": "BaseBdev1", 00:10:56.744 "uuid": "7e43bf70-fc93-55e6-bd29-1e5f922d36d7", 00:10:56.744 "is_configured": true, 00:10:56.744 "data_offset": 2048, 00:10:56.744 "data_size": 63488 00:10:56.744 }, 00:10:56.744 { 00:10:56.744 "name": "BaseBdev2", 00:10:56.744 "uuid": "9f1c21d3-ef32-5195-b278-8d1cf8a08fcf", 00:10:56.744 "is_configured": true, 00:10:56.744 "data_offset": 2048, 00:10:56.744 "data_size": 63488 00:10:56.744 }, 00:10:56.744 { 00:10:56.744 "name": "BaseBdev3", 00:10:56.744 "uuid": "f59b795b-405f-59a3-acbb-f6b72f0f0f6e", 00:10:56.744 "is_configured": true, 00:10:56.744 "data_offset": 2048, 00:10:56.744 "data_size": 63488 00:10:56.744 }, 00:10:56.744 { 00:10:56.744 "name": "BaseBdev4", 00:10:56.744 "uuid": "5057fe2c-f03f-5932-8f34-660c209df449", 00:10:56.744 "is_configured": true, 00:10:56.744 "data_offset": 2048, 00:10:56.744 "data_size": 63488 00:10:56.744 } 00:10:56.744 ] 00:10:56.744 }' 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.744 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.313 [2024-12-14 12:36:56.812990] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.313 [2024-12-14 12:36:56.813025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.313 [2024-12-14 12:36:56.815803] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.313 [2024-12-14 12:36:56.815895] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.313 [2024-12-14 12:36:56.815966] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.313 [2024-12-14 12:36:56.816013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:57.313 { 00:10:57.313 "results": [ 00:10:57.313 { 00:10:57.313 "job": "raid_bdev1", 00:10:57.313 "core_mask": "0x1", 00:10:57.313 "workload": "randrw", 00:10:57.313 "percentage": 50, 00:10:57.313 "status": "finished", 00:10:57.313 "queue_depth": 1, 00:10:57.313 "io_size": 131072, 00:10:57.313 "runtime": 1.413516, 00:10:57.313 "iops": 15377.25784497664, 00:10:57.313 "mibps": 1922.15723062208, 00:10:57.313 "io_failed": 1, 00:10:57.313 "io_timeout": 0, 00:10:57.313 "avg_latency_us": 90.28332437015509, 00:10:57.313 "min_latency_us": 26.829694323144103, 00:10:57.313 "max_latency_us": 1452.380786026201 00:10:57.313 } 00:10:57.313 ], 00:10:57.313 "core_count": 1 00:10:57.313 } 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72775 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72775 ']' 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72775 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72775 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72775' 00:10:57.313 killing process with pid 72775 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72775 00:10:57.313 [2024-12-14 12:36:56.861107] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:57.313 12:36:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72775 00:10:57.573 [2024-12-14 12:36:57.186027] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.w04zBGyqP7 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:58.972 ************************************ 00:10:58.972 END TEST raid_read_error_test 00:10:58.972 ************************************ 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:58.972 00:10:58.972 real 0m4.775s 
00:10:58.972 user 0m5.682s 00:10:58.972 sys 0m0.557s 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.972 12:36:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.972 12:36:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:58.972 12:36:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:58.972 12:36:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.972 12:36:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:58.972 ************************************ 00:10:58.972 START TEST raid_write_error_test 00:10:58.972 ************************************ 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XIyh7m2j7H 00:10:58.972 12:36:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72922 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72922 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72922 ']' 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:58.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:58.972 12:36:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.972 [2024-12-14 12:36:58.550357] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:58.972 [2024-12-14 12:36:58.550492] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72922 ] 00:10:59.231 [2024-12-14 12:36:58.720806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.231 [2024-12-14 12:36:58.832868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.490 [2024-12-14 12:36:59.030668] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.490 [2024-12-14 12:36:59.030729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.750 BaseBdev1_malloc 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.750 true 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.750 [2024-12-14 12:36:59.428958] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:59.750 [2024-12-14 12:36:59.429013] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.750 [2024-12-14 12:36:59.429032] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:59.750 [2024-12-14 12:36:59.429055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.750 [2024-12-14 12:36:59.431140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.750 [2024-12-14 12:36:59.431250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:59.750 BaseBdev1 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.750 BaseBdev2_malloc 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:59.750 12:36:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.750 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.009 true 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.009 [2024-12-14 12:36:59.497158] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:00.009 [2024-12-14 12:36:59.497253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.009 [2024-12-14 12:36:59.497274] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:00.009 [2024-12-14 12:36:59.497284] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.009 [2024-12-14 12:36:59.499404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.009 [2024-12-14 12:36:59.499442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:00.009 BaseBdev2 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:00.009 BaseBdev3_malloc 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:00.009 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.010 true 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.010 [2024-12-14 12:36:59.577224] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:00.010 [2024-12-14 12:36:59.577319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.010 [2024-12-14 12:36:59.577339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:00.010 [2024-12-14 12:36:59.577349] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.010 [2024-12-14 12:36:59.579421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.010 [2024-12-14 12:36:59.579461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:00.010 BaseBdev3 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.010 BaseBdev4_malloc 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.010 true 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.010 [2024-12-14 12:36:59.643462] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:00.010 [2024-12-14 12:36:59.643511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.010 [2024-12-14 12:36:59.643527] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:00.010 [2024-12-14 12:36:59.643537] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.010 [2024-12-14 12:36:59.645485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.010 [2024-12-14 12:36:59.645525] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:00.010 BaseBdev4 
00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.010 [2024-12-14 12:36:59.655506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.010 [2024-12-14 12:36:59.657219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:00.010 [2024-12-14 12:36:59.657342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:00.010 [2024-12-14 12:36:59.657408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:00.010 [2024-12-14 12:36:59.657618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:00.010 [2024-12-14 12:36:59.657635] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:00.010 [2024-12-14 12:36:59.657852] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:00.010 [2024-12-14 12:36:59.658000] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:00.010 [2024-12-14 12:36:59.658011] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:00.010 [2024-12-14 12:36:59.658175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.010 "name": "raid_bdev1", 00:11:00.010 "uuid": "5105079f-f618-490d-baa7-180bb8ae7de5", 00:11:00.010 "strip_size_kb": 64, 00:11:00.010 "state": "online", 00:11:00.010 "raid_level": "raid0", 00:11:00.010 "superblock": true, 00:11:00.010 "num_base_bdevs": 4, 00:11:00.010 "num_base_bdevs_discovered": 4, 00:11:00.010 
"num_base_bdevs_operational": 4, 00:11:00.010 "base_bdevs_list": [ 00:11:00.010 { 00:11:00.010 "name": "BaseBdev1", 00:11:00.010 "uuid": "e740ca64-409c-5bdd-8293-d25d9b0e1d59", 00:11:00.010 "is_configured": true, 00:11:00.010 "data_offset": 2048, 00:11:00.010 "data_size": 63488 00:11:00.010 }, 00:11:00.010 { 00:11:00.010 "name": "BaseBdev2", 00:11:00.010 "uuid": "208f3907-23db-5843-829a-f59d9dfa3bb3", 00:11:00.010 "is_configured": true, 00:11:00.010 "data_offset": 2048, 00:11:00.010 "data_size": 63488 00:11:00.010 }, 00:11:00.010 { 00:11:00.010 "name": "BaseBdev3", 00:11:00.010 "uuid": "f56fc738-72fb-5c1a-a101-c41f4dff4b31", 00:11:00.010 "is_configured": true, 00:11:00.010 "data_offset": 2048, 00:11:00.010 "data_size": 63488 00:11:00.010 }, 00:11:00.010 { 00:11:00.010 "name": "BaseBdev4", 00:11:00.010 "uuid": "30608fc4-f6cc-57fc-8b36-4574cb929981", 00:11:00.010 "is_configured": true, 00:11:00.010 "data_offset": 2048, 00:11:00.010 "data_size": 63488 00:11:00.010 } 00:11:00.010 ] 00:11:00.010 }' 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.010 12:36:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.579 12:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:00.579 12:37:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:00.579 [2024-12-14 12:37:00.219687] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.518 "name": "raid_bdev1", 00:11:01.518 "uuid": "5105079f-f618-490d-baa7-180bb8ae7de5", 00:11:01.518 "strip_size_kb": 64, 00:11:01.518 "state": "online", 00:11:01.518 "raid_level": "raid0", 00:11:01.518 "superblock": true, 00:11:01.518 "num_base_bdevs": 4, 00:11:01.518 "num_base_bdevs_discovered": 4, 00:11:01.518 "num_base_bdevs_operational": 4, 00:11:01.518 "base_bdevs_list": [ 00:11:01.518 { 00:11:01.518 "name": "BaseBdev1", 00:11:01.518 "uuid": "e740ca64-409c-5bdd-8293-d25d9b0e1d59", 00:11:01.518 "is_configured": true, 00:11:01.518 "data_offset": 2048, 00:11:01.518 "data_size": 63488 00:11:01.518 }, 00:11:01.518 { 00:11:01.518 "name": "BaseBdev2", 00:11:01.518 "uuid": "208f3907-23db-5843-829a-f59d9dfa3bb3", 00:11:01.518 "is_configured": true, 00:11:01.518 "data_offset": 2048, 00:11:01.518 "data_size": 63488 00:11:01.518 }, 00:11:01.518 { 00:11:01.518 "name": "BaseBdev3", 00:11:01.518 "uuid": "f56fc738-72fb-5c1a-a101-c41f4dff4b31", 00:11:01.518 "is_configured": true, 00:11:01.518 "data_offset": 2048, 00:11:01.518 "data_size": 63488 00:11:01.518 }, 00:11:01.518 { 00:11:01.518 "name": "BaseBdev4", 00:11:01.518 "uuid": "30608fc4-f6cc-57fc-8b36-4574cb929981", 00:11:01.518 "is_configured": true, 00:11:01.518 "data_offset": 2048, 00:11:01.518 "data_size": 63488 00:11:01.518 } 00:11:01.518 ] 00:11:01.518 }' 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.518 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:02.087 [2024-12-14 12:37:01.607673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.087 [2024-12-14 12:37:01.607711] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.087 [2024-12-14 12:37:01.610360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.087 [2024-12-14 12:37:01.610491] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:02.087 [2024-12-14 12:37:01.610548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.087 [2024-12-14 12:37:01.610561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:02.087 { 00:11:02.087 "results": [ 00:11:02.087 { 00:11:02.087 "job": "raid_bdev1", 00:11:02.087 "core_mask": "0x1", 00:11:02.087 "workload": "randrw", 00:11:02.087 "percentage": 50, 00:11:02.087 "status": "finished", 00:11:02.087 "queue_depth": 1, 00:11:02.087 "io_size": 131072, 00:11:02.087 "runtime": 1.388953, 00:11:02.087 "iops": 15342.491790578946, 00:11:02.087 "mibps": 1917.8114738223683, 00:11:02.087 "io_failed": 1, 00:11:02.087 "io_timeout": 0, 00:11:02.087 "avg_latency_us": 90.40866403741309, 00:11:02.087 "min_latency_us": 26.270742358078603, 00:11:02.087 "max_latency_us": 1409.4532751091704 00:11:02.087 } 00:11:02.087 ], 00:11:02.087 "core_count": 1 00:11:02.087 } 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72922 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72922 ']' 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72922 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72922 00:11:02.087 killing process with pid 72922 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72922' 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72922 00:11:02.087 [2024-12-14 12:37:01.638679] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:02.087 12:37:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72922 00:11:02.347 [2024-12-14 12:37:01.957783] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XIyh7m2j7H 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:03.727 ************************************ 00:11:03.727 END TEST raid_write_error_test 00:11:03.727 ************************************ 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:03.727 00:11:03.727 real 0m4.697s 00:11:03.727 user 0m5.554s 00:11:03.727 sys 0m0.568s 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.727 12:37:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.727 12:37:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:03.727 12:37:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:03.727 12:37:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:03.727 12:37:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.727 12:37:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:03.727 ************************************ 00:11:03.727 START TEST raid_state_function_test 00:11:03.727 ************************************ 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.727 12:37:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:03.727 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:03.728 12:37:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73060 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73060' 00:11:03.728 Process raid pid: 73060 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73060 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73060 ']' 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:03.728 12:37:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.728 [2024-12-14 12:37:03.308892] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:03.728 [2024-12-14 12:37:03.309109] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:03.986 [2024-12-14 12:37:03.482337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:03.986 [2024-12-14 12:37:03.601960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:11:04.246 [2024-12-14 12:37:03.803891] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:04.246 [2024-12-14 12:37:03.804017] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.506 [2024-12-14 12:37:04.151622] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:04.506 [2024-12-14 12:37:04.151684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:04.506 [2024-12-14 12:37:04.151695] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:04.506 [2024-12-14 12:37:04.151707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:04.506 [2024-12-14 12:37:04.151714] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:04.506 [2024-12-14 12:37:04.151724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:04.506 [2024-12-14 12:37:04.151731] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:04.506 [2024-12-14 12:37:04.151741] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:04.506 "name": "Existed_Raid",
00:11:04.506 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.506 "strip_size_kb": 64,
00:11:04.506 "state": "configuring",
00:11:04.506 "raid_level": "concat",
00:11:04.506 "superblock": false,
00:11:04.506 "num_base_bdevs": 4,
00:11:04.506 "num_base_bdevs_discovered": 0,
00:11:04.506 "num_base_bdevs_operational": 4,
00:11:04.506 "base_bdevs_list": [
00:11:04.506 {
00:11:04.506 "name": "BaseBdev1",
00:11:04.506 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.506 "is_configured": false,
00:11:04.506 "data_offset": 0,
00:11:04.506 "data_size": 0
00:11:04.506 },
00:11:04.506 {
00:11:04.506 "name": "BaseBdev2",
00:11:04.506 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.506 "is_configured": false,
00:11:04.506 "data_offset": 0,
00:11:04.506 "data_size": 0
00:11:04.506 },
00:11:04.506 {
00:11:04.506 "name": "BaseBdev3",
00:11:04.506 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.506 "is_configured": false,
00:11:04.506 "data_offset": 0,
00:11:04.506 "data_size": 0
00:11:04.506 },
00:11:04.506 {
00:11:04.506 "name": "BaseBdev4",
00:11:04.506 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.506 "is_configured": false,
00:11:04.506 "data_offset": 0,
00:11:04.506 "data_size": 0
00:11:04.506 }
00:11:04.506 ]
00:11:04.506 }'
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:04.506 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.075 [2024-12-14 12:37:04.570834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:05.075 [2024-12-14 12:37:04.570930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.075 [2024-12-14 12:37:04.582824] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:05.075 [2024-12-14 12:37:04.582927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:05.075 [2024-12-14 12:37:04.582966] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:05.075 [2024-12-14 12:37:04.582994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:05.075 [2024-12-14 12:37:04.583025] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:05.075 [2024-12-14 12:37:04.583073] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:05.075 [2024-12-14 12:37:04.583102] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:05.075 [2024-12-14 12:37:04.583129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.075 [2024-12-14 12:37:04.633032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:05.075 BaseBdev1
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.075 [
00:11:05.075 {
00:11:05.075 "name": "BaseBdev1",
00:11:05.075 "aliases": [
00:11:05.075 "fc5c8108-be35-4682-b373-be551b481cb0"
00:11:05.075 ],
00:11:05.075 "product_name": "Malloc disk",
00:11:05.075 "block_size": 512,
00:11:05.075 "num_blocks": 65536,
00:11:05.075 "uuid": "fc5c8108-be35-4682-b373-be551b481cb0",
00:11:05.075 "assigned_rate_limits": {
00:11:05.075 "rw_ios_per_sec": 0,
00:11:05.075 "rw_mbytes_per_sec": 0,
00:11:05.075 "r_mbytes_per_sec": 0,
00:11:05.075 "w_mbytes_per_sec": 0
00:11:05.075 },
00:11:05.075 "claimed": true,
00:11:05.075 "claim_type": "exclusive_write",
00:11:05.075 "zoned": false,
00:11:05.075 "supported_io_types": {
00:11:05.075 "read": true,
00:11:05.075 "write": true,
00:11:05.075 "unmap": true,
00:11:05.075 "flush": true,
00:11:05.075 "reset": true,
00:11:05.075 "nvme_admin": false,
00:11:05.075 "nvme_io": false,
00:11:05.075 "nvme_io_md": false,
00:11:05.075 "write_zeroes": true,
00:11:05.075 "zcopy": true,
00:11:05.075 "get_zone_info": false,
00:11:05.075 "zone_management": false,
00:11:05.075 "zone_append": false,
00:11:05.075 "compare": false,
00:11:05.075 "compare_and_write": false,
00:11:05.075 "abort": true,
00:11:05.075 "seek_hole": false,
00:11:05.075 "seek_data": false,
00:11:05.075 "copy": true,
00:11:05.075 "nvme_iov_md": false
00:11:05.075 },
00:11:05.075 "memory_domains": [
00:11:05.075 {
00:11:05.075 "dma_device_id": "system",
00:11:05.075 "dma_device_type": 1
00:11:05.075 },
00:11:05.075 {
00:11:05.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:05.075 "dma_device_type": 2
00:11:05.075 }
00:11:05.075 ],
00:11:05.075 "driver_specific": {}
00:11:05.075 }
00:11:05.075 ]
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:05.075 "name": "Existed_Raid",
00:11:05.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.075 "strip_size_kb": 64,
00:11:05.075 "state": "configuring",
00:11:05.075 "raid_level": "concat",
00:11:05.075 "superblock": false,
00:11:05.075 "num_base_bdevs": 4,
00:11:05.075 "num_base_bdevs_discovered": 1,
00:11:05.075 "num_base_bdevs_operational": 4,
00:11:05.075 "base_bdevs_list": [
00:11:05.075 {
00:11:05.075 "name": "BaseBdev1",
00:11:05.075 "uuid": "fc5c8108-be35-4682-b373-be551b481cb0",
00:11:05.075 "is_configured": true,
00:11:05.075 "data_offset": 0,
00:11:05.075 "data_size": 65536
00:11:05.075 },
00:11:05.075 {
00:11:05.075 "name": "BaseBdev2",
00:11:05.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.075 "is_configured": false,
00:11:05.075 "data_offset": 0,
00:11:05.075 "data_size": 0
00:11:05.075 },
00:11:05.075 {
00:11:05.075 "name": "BaseBdev3",
00:11:05.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.075 "is_configured": false,
00:11:05.075 "data_offset": 0,
00:11:05.075 "data_size": 0
00:11:05.075 },
00:11:05.075 {
00:11:05.075 "name": "BaseBdev4",
00:11:05.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.075 "is_configured": false,
00:11:05.075 "data_offset": 0,
00:11:05.075 "data_size": 0
00:11:05.075 }
00:11:05.075 ]
00:11:05.075 }'
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:05.075 12:37:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.645 [2024-12-14 12:37:05.140219] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:05.645 [2024-12-14 12:37:05.140275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.645 [2024-12-14 12:37:05.152265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:05.645 [2024-12-14 12:37:05.154101] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:05.645 [2024-12-14 12:37:05.154142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:05.645 [2024-12-14 12:37:05.154152] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:05.645 [2024-12-14 12:37:05.154163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:05.645 [2024-12-14 12:37:05.154177] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:05.645 [2024-12-14 12:37:05.154186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.645 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:05.645 "name": "Existed_Raid",
00:11:05.645 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.645 "strip_size_kb": 64,
00:11:05.645 "state": "configuring",
00:11:05.645 "raid_level": "concat",
00:11:05.645 "superblock": false,
00:11:05.645 "num_base_bdevs": 4,
00:11:05.645 "num_base_bdevs_discovered": 1,
00:11:05.645 "num_base_bdevs_operational": 4,
00:11:05.645 "base_bdevs_list": [
00:11:05.645 {
00:11:05.645 "name": "BaseBdev1",
00:11:05.645 "uuid": "fc5c8108-be35-4682-b373-be551b481cb0",
00:11:05.645 "is_configured": true,
00:11:05.645 "data_offset": 0,
00:11:05.645 "data_size": 65536
00:11:05.645 },
00:11:05.645 {
00:11:05.645 "name": "BaseBdev2",
00:11:05.645 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.645 "is_configured": false,
00:11:05.645 "data_offset": 0,
00:11:05.645 "data_size": 0
00:11:05.645 },
00:11:05.645 {
00:11:05.645 "name": "BaseBdev3",
00:11:05.645 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.645 "is_configured": false,
00:11:05.645 "data_offset": 0,
00:11:05.645 "data_size": 0
00:11:05.645 },
00:11:05.646 {
00:11:05.646 "name": "BaseBdev4",
00:11:05.646 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:05.646 "is_configured": false,
00:11:05.646 "data_offset": 0,
00:11:05.646 "data_size": 0
00:11:05.646 }
00:11:05.646 ]
00:11:05.646 }'
00:11:05.646 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:05.646 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:05.905 [2024-12-14 12:37:05.635108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:05.905 BaseBdev2
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:05.905 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.165 [
00:11:06.165 {
00:11:06.165 "name": "BaseBdev2",
00:11:06.165 "aliases": [
00:11:06.165 "d21755e4-c70d-4f8e-83d5-b38abc7513fb"
00:11:06.165 ],
00:11:06.165 "product_name": "Malloc disk",
00:11:06.165 "block_size": 512,
00:11:06.165 "num_blocks": 65536,
00:11:06.165 "uuid": "d21755e4-c70d-4f8e-83d5-b38abc7513fb",
00:11:06.165 "assigned_rate_limits": {
00:11:06.165 "rw_ios_per_sec": 0,
00:11:06.165 "rw_mbytes_per_sec": 0,
00:11:06.165 "r_mbytes_per_sec": 0,
00:11:06.165 "w_mbytes_per_sec": 0
00:11:06.165 },
00:11:06.165 "claimed": true,
00:11:06.165 "claim_type": "exclusive_write",
00:11:06.165 "zoned": false,
00:11:06.165 "supported_io_types": {
00:11:06.165 "read": true,
00:11:06.165 "write": true,
00:11:06.165 "unmap": true,
00:11:06.165 "flush": true,
00:11:06.165 "reset": true,
00:11:06.165 "nvme_admin": false,
00:11:06.165 "nvme_io": false,
00:11:06.165 "nvme_io_md": false,
00:11:06.165 "write_zeroes": true,
00:11:06.165 "zcopy": true,
00:11:06.165 "get_zone_info": false,
00:11:06.165 "zone_management": false,
00:11:06.165 "zone_append": false,
00:11:06.165 "compare": false,
00:11:06.165 "compare_and_write": false,
00:11:06.165 "abort": true,
00:11:06.165 "seek_hole": false,
00:11:06.165 "seek_data": false,
00:11:06.165 "copy": true,
00:11:06.165 "nvme_iov_md": false
00:11:06.165 },
00:11:06.165 "memory_domains": [
00:11:06.165 {
00:11:06.165 "dma_device_id": "system",
00:11:06.165 "dma_device_type": 1
00:11:06.165 },
00:11:06.165 {
00:11:06.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:06.165 "dma_device_type": 2
00:11:06.165 }
00:11:06.165 ],
00:11:06.165 "driver_specific": {}
00:11:06.165 }
00:11:06.165 ]
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:06.165 "name": "Existed_Raid",
00:11:06.165 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.165 "strip_size_kb": 64,
00:11:06.165 "state": "configuring",
00:11:06.165 "raid_level": "concat",
00:11:06.165 "superblock": false,
00:11:06.165 "num_base_bdevs": 4,
00:11:06.165 "num_base_bdevs_discovered": 2,
00:11:06.165 "num_base_bdevs_operational": 4,
00:11:06.165 "base_bdevs_list": [
00:11:06.165 {
00:11:06.165 "name": "BaseBdev1",
00:11:06.165 "uuid": "fc5c8108-be35-4682-b373-be551b481cb0",
00:11:06.165 "is_configured": true,
00:11:06.165 "data_offset": 0,
00:11:06.165 "data_size": 65536
00:11:06.165 },
00:11:06.165 {
00:11:06.165 "name": "BaseBdev2",
00:11:06.165 "uuid": "d21755e4-c70d-4f8e-83d5-b38abc7513fb",
00:11:06.165 "is_configured": true,
00:11:06.165 "data_offset": 0,
00:11:06.165 "data_size": 65536
00:11:06.165 },
00:11:06.165 {
00:11:06.165 "name": "BaseBdev3",
00:11:06.165 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.165 "is_configured": false,
00:11:06.165 "data_offset": 0,
00:11:06.165 "data_size": 0
00:11:06.165 },
00:11:06.165 {
00:11:06.165 "name": "BaseBdev4",
00:11:06.165 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.165 "is_configured": false,
00:11:06.165 "data_offset": 0,
00:11:06.165 "data_size": 0
00:11:06.165 }
00:11:06.165 ]
00:11:06.165 }'
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:06.165 12:37:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.435 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:06.435 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.435 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.710 [2024-12-14 12:37:06.188980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:06.710 BaseBdev3
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.710 [
00:11:06.710 {
00:11:06.710 "name": "BaseBdev3",
00:11:06.710 "aliases": [
00:11:06.710 "e8df8f5f-35aa-4104-9f7f-a030527b87ed"
00:11:06.710 ],
00:11:06.710 "product_name": "Malloc disk",
00:11:06.710 "block_size": 512,
00:11:06.710 "num_blocks": 65536,
00:11:06.710 "uuid": "e8df8f5f-35aa-4104-9f7f-a030527b87ed",
00:11:06.710 "assigned_rate_limits": {
00:11:06.710 "rw_ios_per_sec": 0,
00:11:06.710 "rw_mbytes_per_sec": 0,
00:11:06.710 "r_mbytes_per_sec": 0,
00:11:06.710 "w_mbytes_per_sec": 0
00:11:06.710 },
00:11:06.710 "claimed": true,
00:11:06.710 "claim_type": "exclusive_write",
00:11:06.710 "zoned": false,
00:11:06.710 "supported_io_types": {
00:11:06.710 "read": true,
00:11:06.710 "write": true,
00:11:06.710 "unmap": true,
00:11:06.710 "flush": true,
00:11:06.710 "reset": true,
00:11:06.710 "nvme_admin": false,
00:11:06.710 "nvme_io": false,
00:11:06.710 "nvme_io_md": false,
00:11:06.710 "write_zeroes": true,
00:11:06.710 "zcopy": true,
00:11:06.710 "get_zone_info": false,
00:11:06.710 "zone_management": false,
00:11:06.710 "zone_append": false,
00:11:06.710 "compare": false,
00:11:06.710 "compare_and_write": false,
00:11:06.710 "abort": true,
00:11:06.710 "seek_hole": false,
00:11:06.710 "seek_data": false,
00:11:06.710 "copy": true,
00:11:06.710 "nvme_iov_md": false
00:11:06.710 },
00:11:06.710 "memory_domains": [
00:11:06.710 {
00:11:06.710 "dma_device_id": "system",
00:11:06.710 "dma_device_type": 1
00:11:06.710 },
00:11:06.710 {
00:11:06.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:06.710 "dma_device_type": 2
00:11:06.710 }
00:11:06.710 ],
00:11:06.710 "driver_specific": {}
00:11:06.710 }
00:11:06.710 ]
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:06.710 "name": "Existed_Raid",
00:11:06.710 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.710 "strip_size_kb": 64,
00:11:06.710 "state": "configuring",
00:11:06.710 "raid_level": "concat",
00:11:06.710 "superblock": false,
00:11:06.710 "num_base_bdevs": 4,
00:11:06.710 "num_base_bdevs_discovered": 3,
00:11:06.710 "num_base_bdevs_operational": 4,
00:11:06.710 "base_bdevs_list": [
00:11:06.710 {
00:11:06.710 "name": "BaseBdev1",
00:11:06.710 "uuid": "fc5c8108-be35-4682-b373-be551b481cb0",
00:11:06.710 "is_configured": true,
00:11:06.710 "data_offset": 0,
00:11:06.710 "data_size": 65536
00:11:06.710 },
00:11:06.710 {
00:11:06.710 "name": "BaseBdev2",
00:11:06.710 "uuid": "d21755e4-c70d-4f8e-83d5-b38abc7513fb",
00:11:06.710 "is_configured": true,
00:11:06.710 "data_offset": 0,
00:11:06.710 "data_size": 65536
00:11:06.710 },
00:11:06.710 {
00:11:06.710 "name": "BaseBdev3",
00:11:06.710 "uuid": "e8df8f5f-35aa-4104-9f7f-a030527b87ed",
00:11:06.710 "is_configured": true,
00:11:06.710 "data_offset": 0,
00:11:06.710 "data_size": 65536
00:11:06.710 },
00:11:06.710 {
00:11:06.710 "name": "BaseBdev4",
00:11:06.710 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:06.710 "is_configured": false,
00:11:06.710 "data_offset": 0,
00:11:06.710 "data_size": 0
00:11:06.710 }
00:11:06.710 ]
00:11:06.710 }'
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:06.710 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.969 [2024-12-14 12:37:06.684916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:11:06.969 [2024-12-14 12:37:06.684974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:11:06.969 [2024-12-14 12:37:06.684983] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512
00:11:06.969 [2024-12-14 12:37:06.685279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:11:06.969 [2024-12-14 12:37:06.685463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:11:06.969 [2024-12-14 12:37:06.685483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:11:06.969 [2024-12-14 12:37:06.685751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:11:06.969 BaseBdev4
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:06.969 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:07.228 [
00:11:07.229 {
00:11:07.229 "name": "BaseBdev4",
00:11:07.229 "aliases": [
00:11:07.229 "9a94908c-5f9c-4c21-94f0-3e4dfadd2326"
00:11:07.229 ],
00:11:07.229 "product_name": "Malloc disk",
00:11:07.229 "block_size": 512,
00:11:07.229 "num_blocks": 65536,
00:11:07.229 "uuid": "9a94908c-5f9c-4c21-94f0-3e4dfadd2326",
00:11:07.229 "assigned_rate_limits": {
00:11:07.229 "rw_ios_per_sec": 0,
00:11:07.229 "rw_mbytes_per_sec": 0,
00:11:07.229 "r_mbytes_per_sec": 0,
00:11:07.229 "w_mbytes_per_sec": 0
00:11:07.229 },
00:11:07.229 "claimed": true,
00:11:07.229 "claim_type": "exclusive_write",
00:11:07.229 "zoned": false,
00:11:07.229 "supported_io_types": {
00:11:07.229 "read": true,
00:11:07.229 "write": true,
00:11:07.229 "unmap": true,
00:11:07.229 "flush": true,
00:11:07.229 "reset": true,
00:11:07.229 "nvme_admin": false,
00:11:07.229 "nvme_io": false,
00:11:07.229 "nvme_io_md": false,
00:11:07.229 "write_zeroes": true,
00:11:07.229 "zcopy": true,
00:11:07.229 "get_zone_info": false,
00:11:07.229 "zone_management": false,
00:11:07.229 "zone_append": false,
00:11:07.229 "compare": false,
00:11:07.229 "compare_and_write": false,
00:11:07.229 "abort": true,
00:11:07.229 "seek_hole": false,
00:11:07.229 "seek_data": false,
00:11:07.229 "copy": true,
00:11:07.229 "nvme_iov_md": false
00:11:07.229 },
00:11:07.229 "memory_domains": [
00:11:07.229 {
00:11:07.229 "dma_device_id": "system",
00:11:07.229 "dma_device_type": 1
00:11:07.229 },
00:11:07.229 {
00:11:07.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:07.229 "dma_device_type": 2
00:11:07.229 }
00:11:07.229 ],
00:11:07.229 "driver_specific": {}
00:11:07.229 }
00:11:07.229 ]
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.229 "name": "Existed_Raid", 00:11:07.229 "uuid": "76f410a8-6873-46b6-aa16-b758d9170b97", 00:11:07.229 "strip_size_kb": 64, 00:11:07.229 "state": "online", 00:11:07.229 "raid_level": "concat", 00:11:07.229 "superblock": false, 00:11:07.229 "num_base_bdevs": 4, 00:11:07.229 "num_base_bdevs_discovered": 4, 00:11:07.229 "num_base_bdevs_operational": 4, 00:11:07.229 "base_bdevs_list": [ 00:11:07.229 { 00:11:07.229 "name": "BaseBdev1", 00:11:07.229 "uuid": "fc5c8108-be35-4682-b373-be551b481cb0", 00:11:07.229 "is_configured": true, 00:11:07.229 "data_offset": 0, 00:11:07.229 "data_size": 65536 00:11:07.229 }, 00:11:07.229 { 00:11:07.229 "name": "BaseBdev2", 00:11:07.229 "uuid": "d21755e4-c70d-4f8e-83d5-b38abc7513fb", 00:11:07.229 "is_configured": true, 00:11:07.229 "data_offset": 0, 00:11:07.229 "data_size": 65536 00:11:07.229 }, 00:11:07.229 { 00:11:07.229 "name": "BaseBdev3", 
00:11:07.229 "uuid": "e8df8f5f-35aa-4104-9f7f-a030527b87ed", 00:11:07.229 "is_configured": true, 00:11:07.229 "data_offset": 0, 00:11:07.229 "data_size": 65536 00:11:07.229 }, 00:11:07.229 { 00:11:07.229 "name": "BaseBdev4", 00:11:07.229 "uuid": "9a94908c-5f9c-4c21-94f0-3e4dfadd2326", 00:11:07.229 "is_configured": true, 00:11:07.229 "data_offset": 0, 00:11:07.229 "data_size": 65536 00:11:07.229 } 00:11:07.229 ] 00:11:07.229 }' 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.229 12:37:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:07.489 [2024-12-14 12:37:07.208484] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.489 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.748 
12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:07.748 "name": "Existed_Raid", 00:11:07.748 "aliases": [ 00:11:07.748 "76f410a8-6873-46b6-aa16-b758d9170b97" 00:11:07.748 ], 00:11:07.748 "product_name": "Raid Volume", 00:11:07.748 "block_size": 512, 00:11:07.748 "num_blocks": 262144, 00:11:07.748 "uuid": "76f410a8-6873-46b6-aa16-b758d9170b97", 00:11:07.748 "assigned_rate_limits": { 00:11:07.748 "rw_ios_per_sec": 0, 00:11:07.748 "rw_mbytes_per_sec": 0, 00:11:07.748 "r_mbytes_per_sec": 0, 00:11:07.748 "w_mbytes_per_sec": 0 00:11:07.748 }, 00:11:07.748 "claimed": false, 00:11:07.748 "zoned": false, 00:11:07.748 "supported_io_types": { 00:11:07.748 "read": true, 00:11:07.748 "write": true, 00:11:07.748 "unmap": true, 00:11:07.748 "flush": true, 00:11:07.748 "reset": true, 00:11:07.748 "nvme_admin": false, 00:11:07.748 "nvme_io": false, 00:11:07.748 "nvme_io_md": false, 00:11:07.748 "write_zeroes": true, 00:11:07.748 "zcopy": false, 00:11:07.748 "get_zone_info": false, 00:11:07.748 "zone_management": false, 00:11:07.748 "zone_append": false, 00:11:07.748 "compare": false, 00:11:07.748 "compare_and_write": false, 00:11:07.748 "abort": false, 00:11:07.748 "seek_hole": false, 00:11:07.748 "seek_data": false, 00:11:07.748 "copy": false, 00:11:07.748 "nvme_iov_md": false 00:11:07.748 }, 00:11:07.748 "memory_domains": [ 00:11:07.748 { 00:11:07.748 "dma_device_id": "system", 00:11:07.748 "dma_device_type": 1 00:11:07.748 }, 00:11:07.748 { 00:11:07.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.748 "dma_device_type": 2 00:11:07.748 }, 00:11:07.748 { 00:11:07.748 "dma_device_id": "system", 00:11:07.748 "dma_device_type": 1 00:11:07.748 }, 00:11:07.748 { 00:11:07.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.748 "dma_device_type": 2 00:11:07.749 }, 00:11:07.749 { 00:11:07.749 "dma_device_id": "system", 00:11:07.749 "dma_device_type": 1 00:11:07.749 }, 00:11:07.749 { 00:11:07.749 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:07.749 "dma_device_type": 2 00:11:07.749 }, 00:11:07.749 { 00:11:07.749 "dma_device_id": "system", 00:11:07.749 "dma_device_type": 1 00:11:07.749 }, 00:11:07.749 { 00:11:07.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.749 "dma_device_type": 2 00:11:07.749 } 00:11:07.749 ], 00:11:07.749 "driver_specific": { 00:11:07.749 "raid": { 00:11:07.749 "uuid": "76f410a8-6873-46b6-aa16-b758d9170b97", 00:11:07.749 "strip_size_kb": 64, 00:11:07.749 "state": "online", 00:11:07.749 "raid_level": "concat", 00:11:07.749 "superblock": false, 00:11:07.749 "num_base_bdevs": 4, 00:11:07.749 "num_base_bdevs_discovered": 4, 00:11:07.749 "num_base_bdevs_operational": 4, 00:11:07.749 "base_bdevs_list": [ 00:11:07.749 { 00:11:07.749 "name": "BaseBdev1", 00:11:07.749 "uuid": "fc5c8108-be35-4682-b373-be551b481cb0", 00:11:07.749 "is_configured": true, 00:11:07.749 "data_offset": 0, 00:11:07.749 "data_size": 65536 00:11:07.749 }, 00:11:07.749 { 00:11:07.749 "name": "BaseBdev2", 00:11:07.749 "uuid": "d21755e4-c70d-4f8e-83d5-b38abc7513fb", 00:11:07.749 "is_configured": true, 00:11:07.749 "data_offset": 0, 00:11:07.749 "data_size": 65536 00:11:07.749 }, 00:11:07.749 { 00:11:07.749 "name": "BaseBdev3", 00:11:07.749 "uuid": "e8df8f5f-35aa-4104-9f7f-a030527b87ed", 00:11:07.749 "is_configured": true, 00:11:07.749 "data_offset": 0, 00:11:07.749 "data_size": 65536 00:11:07.749 }, 00:11:07.749 { 00:11:07.749 "name": "BaseBdev4", 00:11:07.749 "uuid": "9a94908c-5f9c-4c21-94f0-3e4dfadd2326", 00:11:07.749 "is_configured": true, 00:11:07.749 "data_offset": 0, 00:11:07.749 "data_size": 65536 00:11:07.749 } 00:11:07.749 ] 00:11:07.749 } 00:11:07.749 } 00:11:07.749 }' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:07.749 BaseBdev2 
00:11:07.749 BaseBdev3 00:11:07.749 BaseBdev4' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.749 12:37:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:07.749 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:08.009 12:37:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.009 [2024-12-14 12:37:07.507636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.009 [2024-12-14 12:37:07.507671] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.009 [2024-12-14 12:37:07.507721] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.009 "name": "Existed_Raid", 00:11:08.009 "uuid": "76f410a8-6873-46b6-aa16-b758d9170b97", 00:11:08.009 "strip_size_kb": 64, 00:11:08.009 "state": "offline", 00:11:08.009 "raid_level": "concat", 00:11:08.009 "superblock": false, 00:11:08.009 "num_base_bdevs": 4, 00:11:08.009 "num_base_bdevs_discovered": 3, 00:11:08.009 "num_base_bdevs_operational": 3, 00:11:08.009 "base_bdevs_list": [ 00:11:08.009 { 00:11:08.009 "name": null, 00:11:08.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.009 "is_configured": false, 00:11:08.009 "data_offset": 0, 00:11:08.009 "data_size": 65536 00:11:08.009 }, 00:11:08.009 { 00:11:08.009 "name": "BaseBdev2", 00:11:08.009 "uuid": "d21755e4-c70d-4f8e-83d5-b38abc7513fb", 00:11:08.009 "is_configured": 
true, 00:11:08.009 "data_offset": 0, 00:11:08.009 "data_size": 65536 00:11:08.009 }, 00:11:08.009 { 00:11:08.009 "name": "BaseBdev3", 00:11:08.009 "uuid": "e8df8f5f-35aa-4104-9f7f-a030527b87ed", 00:11:08.009 "is_configured": true, 00:11:08.009 "data_offset": 0, 00:11:08.009 "data_size": 65536 00:11:08.009 }, 00:11:08.009 { 00:11:08.009 "name": "BaseBdev4", 00:11:08.009 "uuid": "9a94908c-5f9c-4c21-94f0-3e4dfadd2326", 00:11:08.009 "is_configured": true, 00:11:08.009 "data_offset": 0, 00:11:08.009 "data_size": 65536 00:11:08.009 } 00:11:08.009 ] 00:11:08.009 }' 00:11:08.009 12:37:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.010 12:37:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 [2024-12-14 12:37:08.087541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.578 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.578 [2024-12-14 12:37:08.238587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.838 12:37:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.838 [2024-12-14 12:37:08.387377] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:08.838 [2024-12-14 12:37:08.387436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.838 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.099 BaseBdev2 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.099 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 [ 00:11:09.100 { 00:11:09.100 "name": "BaseBdev2", 00:11:09.100 "aliases": [ 00:11:09.100 "2b66b376-1927-41b0-be1b-5ee66ddc3f84" 00:11:09.100 ], 00:11:09.100 "product_name": "Malloc disk", 00:11:09.100 "block_size": 512, 00:11:09.100 "num_blocks": 65536, 00:11:09.100 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:09.100 "assigned_rate_limits": { 00:11:09.100 "rw_ios_per_sec": 0, 00:11:09.100 "rw_mbytes_per_sec": 0, 00:11:09.100 "r_mbytes_per_sec": 0, 00:11:09.100 "w_mbytes_per_sec": 0 00:11:09.100 }, 00:11:09.100 "claimed": false, 00:11:09.100 "zoned": false, 00:11:09.100 "supported_io_types": { 00:11:09.100 "read": true, 00:11:09.100 "write": true, 00:11:09.100 "unmap": true, 00:11:09.100 "flush": true, 00:11:09.100 "reset": true, 00:11:09.100 "nvme_admin": false, 00:11:09.100 "nvme_io": false, 00:11:09.100 "nvme_io_md": false, 00:11:09.100 "write_zeroes": true, 00:11:09.100 "zcopy": true, 00:11:09.100 "get_zone_info": false, 00:11:09.100 "zone_management": false, 00:11:09.100 "zone_append": false, 00:11:09.100 "compare": false, 00:11:09.100 "compare_and_write": false, 00:11:09.100 "abort": true, 00:11:09.100 "seek_hole": false, 00:11:09.100 
"seek_data": false, 00:11:09.100 "copy": true, 00:11:09.100 "nvme_iov_md": false 00:11:09.100 }, 00:11:09.100 "memory_domains": [ 00:11:09.100 { 00:11:09.100 "dma_device_id": "system", 00:11:09.100 "dma_device_type": 1 00:11:09.100 }, 00:11:09.100 { 00:11:09.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.100 "dma_device_type": 2 00:11:09.100 } 00:11:09.100 ], 00:11:09.100 "driver_specific": {} 00:11:09.100 } 00:11:09.100 ] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 BaseBdev3 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 [ 00:11:09.100 { 00:11:09.100 "name": "BaseBdev3", 00:11:09.100 "aliases": [ 00:11:09.100 "131c927d-01ab-4a30-849a-53d520788387" 00:11:09.100 ], 00:11:09.100 "product_name": "Malloc disk", 00:11:09.100 "block_size": 512, 00:11:09.100 "num_blocks": 65536, 00:11:09.100 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:09.100 "assigned_rate_limits": { 00:11:09.100 "rw_ios_per_sec": 0, 00:11:09.100 "rw_mbytes_per_sec": 0, 00:11:09.100 "r_mbytes_per_sec": 0, 00:11:09.100 "w_mbytes_per_sec": 0 00:11:09.100 }, 00:11:09.100 "claimed": false, 00:11:09.100 "zoned": false, 00:11:09.100 "supported_io_types": { 00:11:09.100 "read": true, 00:11:09.100 "write": true, 00:11:09.100 "unmap": true, 00:11:09.100 "flush": true, 00:11:09.100 "reset": true, 00:11:09.100 "nvme_admin": false, 00:11:09.100 "nvme_io": false, 00:11:09.100 "nvme_io_md": false, 00:11:09.100 "write_zeroes": true, 00:11:09.100 "zcopy": true, 00:11:09.100 "get_zone_info": false, 00:11:09.100 "zone_management": false, 00:11:09.100 "zone_append": false, 00:11:09.100 "compare": false, 00:11:09.100 "compare_and_write": false, 00:11:09.100 "abort": true, 00:11:09.100 "seek_hole": false, 00:11:09.100 "seek_data": false, 
00:11:09.100 "copy": true, 00:11:09.100 "nvme_iov_md": false 00:11:09.100 }, 00:11:09.100 "memory_domains": [ 00:11:09.100 { 00:11:09.100 "dma_device_id": "system", 00:11:09.100 "dma_device_type": 1 00:11:09.100 }, 00:11:09.100 { 00:11:09.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.100 "dma_device_type": 2 00:11:09.100 } 00:11:09.100 ], 00:11:09.100 "driver_specific": {} 00:11:09.100 } 00:11:09.100 ] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 BaseBdev4 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.100 
12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 [ 00:11:09.100 { 00:11:09.100 "name": "BaseBdev4", 00:11:09.100 "aliases": [ 00:11:09.100 "fa2d71c9-0b6c-49ba-ace8-a550d4244d63" 00:11:09.100 ], 00:11:09.100 "product_name": "Malloc disk", 00:11:09.100 "block_size": 512, 00:11:09.100 "num_blocks": 65536, 00:11:09.100 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:09.100 "assigned_rate_limits": { 00:11:09.100 "rw_ios_per_sec": 0, 00:11:09.100 "rw_mbytes_per_sec": 0, 00:11:09.100 "r_mbytes_per_sec": 0, 00:11:09.100 "w_mbytes_per_sec": 0 00:11:09.100 }, 00:11:09.100 "claimed": false, 00:11:09.100 "zoned": false, 00:11:09.100 "supported_io_types": { 00:11:09.100 "read": true, 00:11:09.100 "write": true, 00:11:09.100 "unmap": true, 00:11:09.100 "flush": true, 00:11:09.100 "reset": true, 00:11:09.100 "nvme_admin": false, 00:11:09.100 "nvme_io": false, 00:11:09.100 "nvme_io_md": false, 00:11:09.100 "write_zeroes": true, 00:11:09.100 "zcopy": true, 00:11:09.100 "get_zone_info": false, 00:11:09.100 "zone_management": false, 00:11:09.100 "zone_append": false, 00:11:09.100 "compare": false, 00:11:09.100 "compare_and_write": false, 00:11:09.100 "abort": true, 00:11:09.100 "seek_hole": false, 00:11:09.100 "seek_data": false, 00:11:09.100 
"copy": true, 00:11:09.100 "nvme_iov_md": false 00:11:09.100 }, 00:11:09.100 "memory_domains": [ 00:11:09.100 { 00:11:09.100 "dma_device_id": "system", 00:11:09.100 "dma_device_type": 1 00:11:09.100 }, 00:11:09.100 { 00:11:09.100 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.100 "dma_device_type": 2 00:11:09.100 } 00:11:09.100 ], 00:11:09.100 "driver_specific": {} 00:11:09.100 } 00:11:09.100 ] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.100 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.100 [2024-12-14 12:37:08.785534] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:09.101 [2024-12-14 12:37:08.785578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:09.101 [2024-12-14 12:37:08.785604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.101 [2024-12-14 12:37:08.787470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:09.101 [2024-12-14 12:37:08.787528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.101 12:37:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.101 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.360 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.360 "name": "Existed_Raid", 00:11:09.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.360 "strip_size_kb": 64, 00:11:09.360 "state": "configuring", 00:11:09.360 
"raid_level": "concat", 00:11:09.360 "superblock": false, 00:11:09.360 "num_base_bdevs": 4, 00:11:09.360 "num_base_bdevs_discovered": 3, 00:11:09.360 "num_base_bdevs_operational": 4, 00:11:09.360 "base_bdevs_list": [ 00:11:09.360 { 00:11:09.360 "name": "BaseBdev1", 00:11:09.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.360 "is_configured": false, 00:11:09.360 "data_offset": 0, 00:11:09.360 "data_size": 0 00:11:09.360 }, 00:11:09.360 { 00:11:09.360 "name": "BaseBdev2", 00:11:09.360 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:09.360 "is_configured": true, 00:11:09.360 "data_offset": 0, 00:11:09.360 "data_size": 65536 00:11:09.360 }, 00:11:09.360 { 00:11:09.360 "name": "BaseBdev3", 00:11:09.360 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:09.360 "is_configured": true, 00:11:09.360 "data_offset": 0, 00:11:09.360 "data_size": 65536 00:11:09.360 }, 00:11:09.360 { 00:11:09.360 "name": "BaseBdev4", 00:11:09.360 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:09.360 "is_configured": true, 00:11:09.360 "data_offset": 0, 00:11:09.360 "data_size": 65536 00:11:09.360 } 00:11:09.360 ] 00:11:09.360 }' 00:11:09.360 12:37:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.360 12:37:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.620 [2024-12-14 12:37:09.228797] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.620 "name": "Existed_Raid", 00:11:09.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.620 "strip_size_kb": 64, 00:11:09.620 "state": "configuring", 00:11:09.620 "raid_level": "concat", 00:11:09.620 "superblock": false, 
00:11:09.620 "num_base_bdevs": 4, 00:11:09.620 "num_base_bdevs_discovered": 2, 00:11:09.620 "num_base_bdevs_operational": 4, 00:11:09.620 "base_bdevs_list": [ 00:11:09.620 { 00:11:09.620 "name": "BaseBdev1", 00:11:09.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.620 "is_configured": false, 00:11:09.620 "data_offset": 0, 00:11:09.620 "data_size": 0 00:11:09.620 }, 00:11:09.620 { 00:11:09.620 "name": null, 00:11:09.620 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:09.620 "is_configured": false, 00:11:09.620 "data_offset": 0, 00:11:09.620 "data_size": 65536 00:11:09.620 }, 00:11:09.620 { 00:11:09.620 "name": "BaseBdev3", 00:11:09.620 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:09.620 "is_configured": true, 00:11:09.620 "data_offset": 0, 00:11:09.620 "data_size": 65536 00:11:09.620 }, 00:11:09.620 { 00:11:09.620 "name": "BaseBdev4", 00:11:09.620 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:09.620 "is_configured": true, 00:11:09.620 "data_offset": 0, 00:11:09.620 "data_size": 65536 00:11:09.620 } 00:11:09.620 ] 00:11:09.620 }' 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.620 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:10.189 12:37:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.189 [2024-12-14 12:37:09.730746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:10.189 BaseBdev1 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:10.189 [ 00:11:10.189 { 00:11:10.189 "name": "BaseBdev1", 00:11:10.189 "aliases": [ 00:11:10.189 "7134f29e-632c-4fed-9903-5f80ac1f2569" 00:11:10.189 ], 00:11:10.189 "product_name": "Malloc disk", 00:11:10.189 "block_size": 512, 00:11:10.189 "num_blocks": 65536, 00:11:10.189 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:10.189 "assigned_rate_limits": { 00:11:10.189 "rw_ios_per_sec": 0, 00:11:10.189 "rw_mbytes_per_sec": 0, 00:11:10.189 "r_mbytes_per_sec": 0, 00:11:10.189 "w_mbytes_per_sec": 0 00:11:10.189 }, 00:11:10.189 "claimed": true, 00:11:10.189 "claim_type": "exclusive_write", 00:11:10.189 "zoned": false, 00:11:10.189 "supported_io_types": { 00:11:10.189 "read": true, 00:11:10.189 "write": true, 00:11:10.189 "unmap": true, 00:11:10.189 "flush": true, 00:11:10.189 "reset": true, 00:11:10.189 "nvme_admin": false, 00:11:10.189 "nvme_io": false, 00:11:10.189 "nvme_io_md": false, 00:11:10.189 "write_zeroes": true, 00:11:10.189 "zcopy": true, 00:11:10.189 "get_zone_info": false, 00:11:10.189 "zone_management": false, 00:11:10.189 "zone_append": false, 00:11:10.189 "compare": false, 00:11:10.189 "compare_and_write": false, 00:11:10.189 "abort": true, 00:11:10.189 "seek_hole": false, 00:11:10.189 "seek_data": false, 00:11:10.189 "copy": true, 00:11:10.189 "nvme_iov_md": false 00:11:10.189 }, 00:11:10.189 "memory_domains": [ 00:11:10.189 { 00:11:10.189 "dma_device_id": "system", 00:11:10.189 "dma_device_type": 1 00:11:10.189 }, 00:11:10.189 { 00:11:10.189 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.189 "dma_device_type": 2 00:11:10.189 } 00:11:10.189 ], 00:11:10.189 "driver_specific": {} 00:11:10.189 } 00:11:10.189 ] 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.189 "name": "Existed_Raid", 00:11:10.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.189 "strip_size_kb": 64, 00:11:10.189 "state": "configuring", 00:11:10.189 "raid_level": "concat", 00:11:10.189 "superblock": false, 
00:11:10.189 "num_base_bdevs": 4, 00:11:10.189 "num_base_bdevs_discovered": 3, 00:11:10.189 "num_base_bdevs_operational": 4, 00:11:10.189 "base_bdevs_list": [ 00:11:10.189 { 00:11:10.189 "name": "BaseBdev1", 00:11:10.189 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:10.189 "is_configured": true, 00:11:10.189 "data_offset": 0, 00:11:10.189 "data_size": 65536 00:11:10.189 }, 00:11:10.189 { 00:11:10.189 "name": null, 00:11:10.189 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:10.189 "is_configured": false, 00:11:10.189 "data_offset": 0, 00:11:10.189 "data_size": 65536 00:11:10.189 }, 00:11:10.189 { 00:11:10.189 "name": "BaseBdev3", 00:11:10.189 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:10.189 "is_configured": true, 00:11:10.189 "data_offset": 0, 00:11:10.189 "data_size": 65536 00:11:10.189 }, 00:11:10.189 { 00:11:10.189 "name": "BaseBdev4", 00:11:10.189 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:10.189 "is_configured": true, 00:11:10.189 "data_offset": 0, 00:11:10.189 "data_size": 65536 00:11:10.189 } 00:11:10.189 ] 00:11:10.189 }' 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.189 12:37:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:10.760 12:37:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.760 [2024-12-14 12:37:10.281914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.760 12:37:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.760 "name": "Existed_Raid", 00:11:10.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.760 "strip_size_kb": 64, 00:11:10.760 "state": "configuring", 00:11:10.760 "raid_level": "concat", 00:11:10.760 "superblock": false, 00:11:10.760 "num_base_bdevs": 4, 00:11:10.760 "num_base_bdevs_discovered": 2, 00:11:10.760 "num_base_bdevs_operational": 4, 00:11:10.760 "base_bdevs_list": [ 00:11:10.760 { 00:11:10.760 "name": "BaseBdev1", 00:11:10.760 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:10.760 "is_configured": true, 00:11:10.760 "data_offset": 0, 00:11:10.760 "data_size": 65536 00:11:10.760 }, 00:11:10.760 { 00:11:10.760 "name": null, 00:11:10.760 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:10.760 "is_configured": false, 00:11:10.760 "data_offset": 0, 00:11:10.760 "data_size": 65536 00:11:10.760 }, 00:11:10.760 { 00:11:10.760 "name": null, 00:11:10.760 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:10.760 "is_configured": false, 00:11:10.760 "data_offset": 0, 00:11:10.760 "data_size": 65536 00:11:10.760 }, 00:11:10.760 { 00:11:10.760 "name": "BaseBdev4", 00:11:10.760 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:10.760 "is_configured": true, 00:11:10.760 "data_offset": 0, 00:11:10.760 "data_size": 65536 00:11:10.760 } 00:11:10.760 ] 00:11:10.760 }' 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.760 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.019 12:37:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.019 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.020 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.020 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.020 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.020 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:11.020 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:11.020 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.020 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.286 [2024-12-14 12:37:10.757155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.286 "name": "Existed_Raid", 00:11:11.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.286 "strip_size_kb": 64, 00:11:11.286 "state": "configuring", 00:11:11.286 "raid_level": "concat", 00:11:11.286 "superblock": false, 00:11:11.286 "num_base_bdevs": 4, 00:11:11.286 "num_base_bdevs_discovered": 3, 00:11:11.286 "num_base_bdevs_operational": 4, 00:11:11.286 "base_bdevs_list": [ 00:11:11.286 { 00:11:11.286 "name": "BaseBdev1", 00:11:11.286 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:11.286 "is_configured": true, 00:11:11.286 "data_offset": 0, 00:11:11.286 "data_size": 65536 00:11:11.286 }, 00:11:11.286 { 00:11:11.286 "name": null, 00:11:11.286 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:11.286 "is_configured": false, 00:11:11.286 "data_offset": 0, 00:11:11.286 "data_size": 65536 00:11:11.286 }, 00:11:11.286 { 00:11:11.286 "name": "BaseBdev3", 00:11:11.286 "uuid": 
"131c927d-01ab-4a30-849a-53d520788387", 00:11:11.286 "is_configured": true, 00:11:11.286 "data_offset": 0, 00:11:11.286 "data_size": 65536 00:11:11.286 }, 00:11:11.286 { 00:11:11.286 "name": "BaseBdev4", 00:11:11.286 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:11.286 "is_configured": true, 00:11:11.286 "data_offset": 0, 00:11:11.286 "data_size": 65536 00:11:11.286 } 00:11:11.286 ] 00:11:11.286 }' 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.286 12:37:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.549 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.549 [2024-12-14 12:37:11.256307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.808 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.808 "name": "Existed_Raid", 00:11:11.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.808 "strip_size_kb": 64, 00:11:11.808 "state": "configuring", 00:11:11.808 "raid_level": "concat", 00:11:11.808 "superblock": false, 00:11:11.808 "num_base_bdevs": 4, 00:11:11.808 
"num_base_bdevs_discovered": 2, 00:11:11.808 "num_base_bdevs_operational": 4, 00:11:11.808 "base_bdevs_list": [ 00:11:11.808 { 00:11:11.808 "name": null, 00:11:11.808 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:11.808 "is_configured": false, 00:11:11.808 "data_offset": 0, 00:11:11.808 "data_size": 65536 00:11:11.808 }, 00:11:11.808 { 00:11:11.808 "name": null, 00:11:11.808 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:11.808 "is_configured": false, 00:11:11.809 "data_offset": 0, 00:11:11.809 "data_size": 65536 00:11:11.809 }, 00:11:11.809 { 00:11:11.809 "name": "BaseBdev3", 00:11:11.809 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:11.809 "is_configured": true, 00:11:11.809 "data_offset": 0, 00:11:11.809 "data_size": 65536 00:11:11.809 }, 00:11:11.809 { 00:11:11.809 "name": "BaseBdev4", 00:11:11.809 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:11.809 "is_configured": true, 00:11:11.809 "data_offset": 0, 00:11:11.809 "data_size": 65536 00:11:11.809 } 00:11:11.809 ] 00:11:11.809 }' 00:11:11.809 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.809 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.068 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:12.068 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.068 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.068 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.327 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.327 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:12.327 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.328 [2024-12-14 12:37:11.816154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.328 "name": "Existed_Raid", 00:11:12.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.328 "strip_size_kb": 64, 00:11:12.328 "state": "configuring", 00:11:12.328 "raid_level": "concat", 00:11:12.328 "superblock": false, 00:11:12.328 "num_base_bdevs": 4, 00:11:12.328 "num_base_bdevs_discovered": 3, 00:11:12.328 "num_base_bdevs_operational": 4, 00:11:12.328 "base_bdevs_list": [ 00:11:12.328 { 00:11:12.328 "name": null, 00:11:12.328 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:12.328 "is_configured": false, 00:11:12.328 "data_offset": 0, 00:11:12.328 "data_size": 65536 00:11:12.328 }, 00:11:12.328 { 00:11:12.328 "name": "BaseBdev2", 00:11:12.328 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:12.328 "is_configured": true, 00:11:12.328 "data_offset": 0, 00:11:12.328 "data_size": 65536 00:11:12.328 }, 00:11:12.328 { 00:11:12.328 "name": "BaseBdev3", 00:11:12.328 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:12.328 "is_configured": true, 00:11:12.328 "data_offset": 0, 00:11:12.328 "data_size": 65536 00:11:12.328 }, 00:11:12.328 { 00:11:12.328 "name": "BaseBdev4", 00:11:12.328 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:12.328 "is_configured": true, 00:11:12.328 "data_offset": 0, 00:11:12.328 "data_size": 65536 00:11:12.328 } 00:11:12.328 ] 00:11:12.328 }' 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.328 12:37:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.587 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7134f29e-632c-4fed-9903-5f80ac1f2569 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.847 [2024-12-14 12:37:12.375464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:12.847 [2024-12-14 12:37:12.375533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:12.847 [2024-12-14 12:37:12.375541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:12.847 [2024-12-14 12:37:12.375827] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:12.847 [2024-12-14 12:37:12.375977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:12.847 [2024-12-14 12:37:12.375993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:12.847 [2024-12-14 12:37:12.376270] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.847 NewBaseBdev 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:12.847 [ 00:11:12.847 { 00:11:12.847 "name": "NewBaseBdev", 00:11:12.847 "aliases": [ 00:11:12.847 "7134f29e-632c-4fed-9903-5f80ac1f2569" 00:11:12.847 ], 00:11:12.847 "product_name": "Malloc disk", 00:11:12.847 "block_size": 512, 00:11:12.847 "num_blocks": 65536, 00:11:12.847 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:12.847 "assigned_rate_limits": { 00:11:12.847 "rw_ios_per_sec": 0, 00:11:12.847 "rw_mbytes_per_sec": 0, 00:11:12.847 "r_mbytes_per_sec": 0, 00:11:12.847 "w_mbytes_per_sec": 0 00:11:12.847 }, 00:11:12.847 "claimed": true, 00:11:12.847 "claim_type": "exclusive_write", 00:11:12.847 "zoned": false, 00:11:12.847 "supported_io_types": { 00:11:12.847 "read": true, 00:11:12.847 "write": true, 00:11:12.847 "unmap": true, 00:11:12.847 "flush": true, 00:11:12.847 "reset": true, 00:11:12.847 "nvme_admin": false, 00:11:12.847 "nvme_io": false, 00:11:12.847 "nvme_io_md": false, 00:11:12.847 "write_zeroes": true, 00:11:12.847 "zcopy": true, 00:11:12.847 "get_zone_info": false, 00:11:12.847 "zone_management": false, 00:11:12.847 "zone_append": false, 00:11:12.847 "compare": false, 00:11:12.847 "compare_and_write": false, 00:11:12.847 "abort": true, 00:11:12.847 "seek_hole": false, 00:11:12.847 "seek_data": false, 00:11:12.847 "copy": true, 00:11:12.847 "nvme_iov_md": false 00:11:12.847 }, 00:11:12.847 "memory_domains": [ 00:11:12.847 { 00:11:12.847 "dma_device_id": "system", 00:11:12.847 "dma_device_type": 1 00:11:12.847 }, 00:11:12.847 { 00:11:12.847 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.847 "dma_device_type": 2 00:11:12.847 } 00:11:12.847 ], 00:11:12.847 "driver_specific": {} 00:11:12.847 } 00:11:12.847 ] 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.847 "name": "Existed_Raid", 00:11:12.847 "uuid": "95d2abd6-97d7-4b48-b76e-0c4a1ad5b79a", 00:11:12.847 "strip_size_kb": 64, 00:11:12.847 "state": "online", 00:11:12.847 "raid_level": "concat", 00:11:12.847 "superblock": false, 00:11:12.847 
"num_base_bdevs": 4, 00:11:12.847 "num_base_bdevs_discovered": 4, 00:11:12.847 "num_base_bdevs_operational": 4, 00:11:12.847 "base_bdevs_list": [ 00:11:12.847 { 00:11:12.847 "name": "NewBaseBdev", 00:11:12.847 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:12.847 "is_configured": true, 00:11:12.847 "data_offset": 0, 00:11:12.847 "data_size": 65536 00:11:12.847 }, 00:11:12.847 { 00:11:12.847 "name": "BaseBdev2", 00:11:12.847 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:12.847 "is_configured": true, 00:11:12.847 "data_offset": 0, 00:11:12.847 "data_size": 65536 00:11:12.847 }, 00:11:12.847 { 00:11:12.847 "name": "BaseBdev3", 00:11:12.847 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:12.847 "is_configured": true, 00:11:12.847 "data_offset": 0, 00:11:12.847 "data_size": 65536 00:11:12.847 }, 00:11:12.847 { 00:11:12.847 "name": "BaseBdev4", 00:11:12.847 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:12.847 "is_configured": true, 00:11:12.847 "data_offset": 0, 00:11:12.847 "data_size": 65536 00:11:12.847 } 00:11:12.847 ] 00:11:12.847 }' 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.847 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.107 12:37:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.107 [2024-12-14 12:37:12.811132] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.107 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.366 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.366 "name": "Existed_Raid", 00:11:13.366 "aliases": [ 00:11:13.366 "95d2abd6-97d7-4b48-b76e-0c4a1ad5b79a" 00:11:13.366 ], 00:11:13.366 "product_name": "Raid Volume", 00:11:13.366 "block_size": 512, 00:11:13.366 "num_blocks": 262144, 00:11:13.366 "uuid": "95d2abd6-97d7-4b48-b76e-0c4a1ad5b79a", 00:11:13.366 "assigned_rate_limits": { 00:11:13.366 "rw_ios_per_sec": 0, 00:11:13.366 "rw_mbytes_per_sec": 0, 00:11:13.366 "r_mbytes_per_sec": 0, 00:11:13.366 "w_mbytes_per_sec": 0 00:11:13.366 }, 00:11:13.366 "claimed": false, 00:11:13.366 "zoned": false, 00:11:13.366 "supported_io_types": { 00:11:13.366 "read": true, 00:11:13.366 "write": true, 00:11:13.366 "unmap": true, 00:11:13.366 "flush": true, 00:11:13.366 "reset": true, 00:11:13.366 "nvme_admin": false, 00:11:13.366 "nvme_io": false, 00:11:13.366 "nvme_io_md": false, 00:11:13.366 "write_zeroes": true, 00:11:13.366 "zcopy": false, 00:11:13.366 "get_zone_info": false, 00:11:13.366 "zone_management": false, 00:11:13.366 "zone_append": false, 00:11:13.366 "compare": false, 00:11:13.366 "compare_and_write": false, 00:11:13.366 "abort": false, 00:11:13.366 "seek_hole": false, 00:11:13.366 "seek_data": false, 00:11:13.366 "copy": false, 00:11:13.366 "nvme_iov_md": false 00:11:13.366 }, 
00:11:13.366 "memory_domains": [ 00:11:13.366 { 00:11:13.366 "dma_device_id": "system", 00:11:13.366 "dma_device_type": 1 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.366 "dma_device_type": 2 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "dma_device_id": "system", 00:11:13.366 "dma_device_type": 1 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.366 "dma_device_type": 2 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "dma_device_id": "system", 00:11:13.366 "dma_device_type": 1 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.366 "dma_device_type": 2 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "dma_device_id": "system", 00:11:13.366 "dma_device_type": 1 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.366 "dma_device_type": 2 00:11:13.366 } 00:11:13.366 ], 00:11:13.366 "driver_specific": { 00:11:13.366 "raid": { 00:11:13.366 "uuid": "95d2abd6-97d7-4b48-b76e-0c4a1ad5b79a", 00:11:13.366 "strip_size_kb": 64, 00:11:13.366 "state": "online", 00:11:13.366 "raid_level": "concat", 00:11:13.366 "superblock": false, 00:11:13.366 "num_base_bdevs": 4, 00:11:13.366 "num_base_bdevs_discovered": 4, 00:11:13.366 "num_base_bdevs_operational": 4, 00:11:13.366 "base_bdevs_list": [ 00:11:13.366 { 00:11:13.366 "name": "NewBaseBdev", 00:11:13.366 "uuid": "7134f29e-632c-4fed-9903-5f80ac1f2569", 00:11:13.366 "is_configured": true, 00:11:13.366 "data_offset": 0, 00:11:13.366 "data_size": 65536 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "name": "BaseBdev2", 00:11:13.366 "uuid": "2b66b376-1927-41b0-be1b-5ee66ddc3f84", 00:11:13.366 "is_configured": true, 00:11:13.366 "data_offset": 0, 00:11:13.366 "data_size": 65536 00:11:13.366 }, 00:11:13.366 { 00:11:13.366 "name": "BaseBdev3", 00:11:13.366 "uuid": "131c927d-01ab-4a30-849a-53d520788387", 00:11:13.366 "is_configured": true, 00:11:13.366 "data_offset": 0, 
00:11:13.367 "data_size": 65536 00:11:13.367 }, 00:11:13.367 { 00:11:13.367 "name": "BaseBdev4", 00:11:13.367 "uuid": "fa2d71c9-0b6c-49ba-ace8-a550d4244d63", 00:11:13.367 "is_configured": true, 00:11:13.367 "data_offset": 0, 00:11:13.367 "data_size": 65536 00:11:13.367 } 00:11:13.367 ] 00:11:13.367 } 00:11:13.367 } 00:11:13.367 }' 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:13.367 BaseBdev2 00:11:13.367 BaseBdev3 00:11:13.367 BaseBdev4' 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.367 12:37:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.367 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.626 [2024-12-14 12:37:13.110331] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.626 [2024-12-14 12:37:13.110367] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.626 [2024-12-14 12:37:13.110457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.626 [2024-12-14 12:37:13.110528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.626 [2024-12-14 12:37:13.110561] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 73060 00:11:13.626 12:37:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73060 ']' 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73060 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73060 00:11:13.626 killing process with pid 73060 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73060' 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73060 00:11:13.626 [2024-12-14 12:37:13.157674] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:13.626 12:37:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73060 00:11:13.886 [2024-12-14 12:37:13.560284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:15.301 00:11:15.301 real 0m11.482s 00:11:15.301 user 0m18.275s 00:11:15.301 sys 0m1.984s 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.301 ************************************ 00:11:15.301 END TEST raid_state_function_test 00:11:15.301 ************************************ 00:11:15.301 12:37:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:11:15.301 12:37:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:15.301 12:37:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.301 12:37:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.301 ************************************ 00:11:15.301 START TEST raid_state_function_test_sb 00:11:15.301 ************************************ 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73733 00:11:15.301 12:37:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73733' 00:11:15.301 Process raid pid: 73733 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73733 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73733 ']' 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.301 12:37:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.301 [2024-12-14 12:37:14.860016] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:15.301 [2024-12-14 12:37:14.860156] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.562 [2024-12-14 12:37:15.037403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.562 [2024-12-14 12:37:15.150791] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.822 [2024-12-14 12:37:15.354272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:15.822 [2024-12-14 12:37:15.354320] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.081 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.081 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:16.081 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.081 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.081 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.081 [2024-12-14 12:37:15.705936] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.082 [2024-12-14 12:37:15.705991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.082 [2024-12-14 12:37:15.706001] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.082 [2024-12-14 12:37:15.706011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.082 [2024-12-14 12:37:15.706022] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:16.082 [2024-12-14 12:37:15.706031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.082 [2024-12-14 12:37:15.706037] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:16.082 [2024-12-14 12:37:15.706057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.082 12:37:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.082 "name": "Existed_Raid", 00:11:16.082 "uuid": "503b4853-2341-4376-9c96-055075d34c03", 00:11:16.082 "strip_size_kb": 64, 00:11:16.082 "state": "configuring", 00:11:16.082 "raid_level": "concat", 00:11:16.082 "superblock": true, 00:11:16.082 "num_base_bdevs": 4, 00:11:16.082 "num_base_bdevs_discovered": 0, 00:11:16.082 "num_base_bdevs_operational": 4, 00:11:16.082 "base_bdevs_list": [ 00:11:16.082 { 00:11:16.082 "name": "BaseBdev1", 00:11:16.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.082 "is_configured": false, 00:11:16.082 "data_offset": 0, 00:11:16.082 "data_size": 0 00:11:16.082 }, 00:11:16.082 { 00:11:16.082 "name": "BaseBdev2", 00:11:16.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.082 "is_configured": false, 00:11:16.082 "data_offset": 0, 00:11:16.082 "data_size": 0 00:11:16.082 }, 00:11:16.082 { 00:11:16.082 "name": "BaseBdev3", 00:11:16.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.082 "is_configured": false, 00:11:16.082 "data_offset": 0, 00:11:16.082 "data_size": 0 00:11:16.082 }, 00:11:16.082 { 00:11:16.082 "name": "BaseBdev4", 00:11:16.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.082 "is_configured": false, 00:11:16.082 "data_offset": 0, 00:11:16.082 "data_size": 0 00:11:16.082 } 00:11:16.082 ] 00:11:16.082 }' 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.082 12:37:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 12:37:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 [2024-12-14 12:37:16.085231] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.651 [2024-12-14 12:37:16.085275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 [2024-12-14 12:37:16.097225] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:16.651 [2024-12-14 12:37:16.097265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:16.651 [2024-12-14 12:37:16.097274] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.651 [2024-12-14 12:37:16.097283] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.651 [2024-12-14 12:37:16.097289] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.651 [2024-12-14 12:37:16.097299] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.651 [2024-12-14 12:37:16.097305] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:16.651 [2024-12-14 12:37:16.097313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 [2024-12-14 12:37:16.145006] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.651 BaseBdev1 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.651 [ 00:11:16.651 { 00:11:16.651 "name": "BaseBdev1", 00:11:16.651 "aliases": [ 00:11:16.651 "3938e4f9-fe83-4f14-8dc2-c8e7a71b3e6a" 00:11:16.651 ], 00:11:16.651 "product_name": "Malloc disk", 00:11:16.651 "block_size": 512, 00:11:16.651 "num_blocks": 65536, 00:11:16.651 "uuid": "3938e4f9-fe83-4f14-8dc2-c8e7a71b3e6a", 00:11:16.651 "assigned_rate_limits": { 00:11:16.651 "rw_ios_per_sec": 0, 00:11:16.651 "rw_mbytes_per_sec": 0, 00:11:16.651 "r_mbytes_per_sec": 0, 00:11:16.651 "w_mbytes_per_sec": 0 00:11:16.651 }, 00:11:16.651 "claimed": true, 00:11:16.651 "claim_type": "exclusive_write", 00:11:16.651 "zoned": false, 00:11:16.651 "supported_io_types": { 00:11:16.651 "read": true, 00:11:16.651 "write": true, 00:11:16.651 "unmap": true, 00:11:16.651 "flush": true, 00:11:16.651 "reset": true, 00:11:16.651 "nvme_admin": false, 00:11:16.651 "nvme_io": false, 00:11:16.651 "nvme_io_md": false, 00:11:16.651 "write_zeroes": true, 00:11:16.651 "zcopy": true, 00:11:16.651 "get_zone_info": false, 00:11:16.651 "zone_management": false, 00:11:16.651 "zone_append": false, 00:11:16.651 "compare": false, 00:11:16.651 "compare_and_write": false, 00:11:16.651 "abort": true, 00:11:16.651 "seek_hole": false, 00:11:16.651 "seek_data": false, 00:11:16.651 "copy": true, 00:11:16.651 "nvme_iov_md": false 00:11:16.651 }, 00:11:16.651 "memory_domains": [ 00:11:16.651 { 00:11:16.651 "dma_device_id": "system", 00:11:16.651 "dma_device_type": 1 00:11:16.651 }, 00:11:16.651 { 00:11:16.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.651 "dma_device_type": 2 00:11:16.651 } 
00:11:16.651 ], 00:11:16.651 "driver_specific": {} 00:11:16.651 } 00:11:16.651 ] 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.651 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.652 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.652 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.652 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.652 12:37:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.652 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.652 "name": "Existed_Raid", 00:11:16.652 "uuid": "58fc13dc-68ce-4b38-8789-db851a5c7835", 00:11:16.652 "strip_size_kb": 64, 00:11:16.652 "state": "configuring", 00:11:16.652 "raid_level": "concat", 00:11:16.652 "superblock": true, 00:11:16.652 "num_base_bdevs": 4, 00:11:16.652 "num_base_bdevs_discovered": 1, 00:11:16.652 "num_base_bdevs_operational": 4, 00:11:16.652 "base_bdevs_list": [ 00:11:16.652 { 00:11:16.652 "name": "BaseBdev1", 00:11:16.652 "uuid": "3938e4f9-fe83-4f14-8dc2-c8e7a71b3e6a", 00:11:16.652 "is_configured": true, 00:11:16.652 "data_offset": 2048, 00:11:16.652 "data_size": 63488 00:11:16.652 }, 00:11:16.652 { 00:11:16.652 "name": "BaseBdev2", 00:11:16.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.652 "is_configured": false, 00:11:16.652 "data_offset": 0, 00:11:16.652 "data_size": 0 00:11:16.652 }, 00:11:16.652 { 00:11:16.652 "name": "BaseBdev3", 00:11:16.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.652 "is_configured": false, 00:11:16.652 "data_offset": 0, 00:11:16.652 "data_size": 0 00:11:16.652 }, 00:11:16.652 { 00:11:16.652 "name": "BaseBdev4", 00:11:16.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.652 "is_configured": false, 00:11:16.652 "data_offset": 0, 00:11:16.652 "data_size": 0 00:11:16.652 } 00:11:16.652 ] 00:11:16.652 }' 00:11:16.652 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.652 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.912 12:37:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.912 [2024-12-14 12:37:16.572320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:16.912 [2024-12-14 12:37:16.572378] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.912 [2024-12-14 12:37:16.584353] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.912 [2024-12-14 12:37:16.586198] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:16.912 [2024-12-14 12:37:16.586290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:16.912 [2024-12-14 12:37:16.586319] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:16.912 [2024-12-14 12:37:16.586346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:16.912 [2024-12-14 12:37:16.586366] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:16.912 [2024-12-14 12:37:16.586387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.912 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:16.912 "name": "Existed_Raid", 00:11:16.912 "uuid": "d3f7be8e-fe11-4ff2-b534-db0b1d872a91", 00:11:16.912 "strip_size_kb": 64, 00:11:16.912 "state": "configuring", 00:11:16.912 "raid_level": "concat", 00:11:16.912 "superblock": true, 00:11:16.912 "num_base_bdevs": 4, 00:11:16.912 "num_base_bdevs_discovered": 1, 00:11:16.912 "num_base_bdevs_operational": 4, 00:11:16.912 "base_bdevs_list": [ 00:11:16.912 { 00:11:16.912 "name": "BaseBdev1", 00:11:16.912 "uuid": "3938e4f9-fe83-4f14-8dc2-c8e7a71b3e6a", 00:11:16.912 "is_configured": true, 00:11:16.912 "data_offset": 2048, 00:11:16.912 "data_size": 63488 00:11:16.912 }, 00:11:16.912 { 00:11:16.912 "name": "BaseBdev2", 00:11:16.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.912 "is_configured": false, 00:11:16.912 "data_offset": 0, 00:11:16.912 "data_size": 0 00:11:16.913 }, 00:11:16.913 { 00:11:16.913 "name": "BaseBdev3", 00:11:16.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.913 "is_configured": false, 00:11:16.913 "data_offset": 0, 00:11:16.913 "data_size": 0 00:11:16.913 }, 00:11:16.913 { 00:11:16.913 "name": "BaseBdev4", 00:11:16.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.913 "is_configured": false, 00:11:16.913 "data_offset": 0, 00:11:16.913 "data_size": 0 00:11:16.913 } 00:11:16.913 ] 00:11:16.913 }' 00:11:16.913 12:37:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.913 12:37:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.482 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.482 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.482 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.482 [2024-12-14 12:37:17.067178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:17.482 BaseBdev2 00:11:17.482 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.482 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:17.482 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.483 [ 00:11:17.483 { 00:11:17.483 "name": "BaseBdev2", 00:11:17.483 "aliases": [ 00:11:17.483 "e78d6a76-8b9a-4005-bc3d-643117e34786" 00:11:17.483 ], 00:11:17.483 "product_name": "Malloc disk", 00:11:17.483 "block_size": 512, 00:11:17.483 "num_blocks": 65536, 00:11:17.483 "uuid": "e78d6a76-8b9a-4005-bc3d-643117e34786", 
00:11:17.483 "assigned_rate_limits": { 00:11:17.483 "rw_ios_per_sec": 0, 00:11:17.483 "rw_mbytes_per_sec": 0, 00:11:17.483 "r_mbytes_per_sec": 0, 00:11:17.483 "w_mbytes_per_sec": 0 00:11:17.483 }, 00:11:17.483 "claimed": true, 00:11:17.483 "claim_type": "exclusive_write", 00:11:17.483 "zoned": false, 00:11:17.483 "supported_io_types": { 00:11:17.483 "read": true, 00:11:17.483 "write": true, 00:11:17.483 "unmap": true, 00:11:17.483 "flush": true, 00:11:17.483 "reset": true, 00:11:17.483 "nvme_admin": false, 00:11:17.483 "nvme_io": false, 00:11:17.483 "nvme_io_md": false, 00:11:17.483 "write_zeroes": true, 00:11:17.483 "zcopy": true, 00:11:17.483 "get_zone_info": false, 00:11:17.483 "zone_management": false, 00:11:17.483 "zone_append": false, 00:11:17.483 "compare": false, 00:11:17.483 "compare_and_write": false, 00:11:17.483 "abort": true, 00:11:17.483 "seek_hole": false, 00:11:17.483 "seek_data": false, 00:11:17.483 "copy": true, 00:11:17.483 "nvme_iov_md": false 00:11:17.483 }, 00:11:17.483 "memory_domains": [ 00:11:17.483 { 00:11:17.483 "dma_device_id": "system", 00:11:17.483 "dma_device_type": 1 00:11:17.483 }, 00:11:17.483 { 00:11:17.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.483 "dma_device_type": 2 00:11:17.483 } 00:11:17.483 ], 00:11:17.483 "driver_specific": {} 00:11:17.483 } 00:11:17.483 ] 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.483 "name": "Existed_Raid", 00:11:17.483 "uuid": "d3f7be8e-fe11-4ff2-b534-db0b1d872a91", 00:11:17.483 "strip_size_kb": 64, 00:11:17.483 "state": "configuring", 00:11:17.483 "raid_level": "concat", 00:11:17.483 "superblock": true, 00:11:17.483 "num_base_bdevs": 4, 00:11:17.483 "num_base_bdevs_discovered": 2, 00:11:17.483 
"num_base_bdevs_operational": 4, 00:11:17.483 "base_bdevs_list": [ 00:11:17.483 { 00:11:17.483 "name": "BaseBdev1", 00:11:17.483 "uuid": "3938e4f9-fe83-4f14-8dc2-c8e7a71b3e6a", 00:11:17.483 "is_configured": true, 00:11:17.483 "data_offset": 2048, 00:11:17.483 "data_size": 63488 00:11:17.483 }, 00:11:17.483 { 00:11:17.483 "name": "BaseBdev2", 00:11:17.483 "uuid": "e78d6a76-8b9a-4005-bc3d-643117e34786", 00:11:17.483 "is_configured": true, 00:11:17.483 "data_offset": 2048, 00:11:17.483 "data_size": 63488 00:11:17.483 }, 00:11:17.483 { 00:11:17.483 "name": "BaseBdev3", 00:11:17.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.483 "is_configured": false, 00:11:17.483 "data_offset": 0, 00:11:17.483 "data_size": 0 00:11:17.483 }, 00:11:17.483 { 00:11:17.483 "name": "BaseBdev4", 00:11:17.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.483 "is_configured": false, 00:11:17.483 "data_offset": 0, 00:11:17.483 "data_size": 0 00:11:17.483 } 00:11:17.483 ] 00:11:17.483 }' 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.483 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.051 [2024-12-14 12:37:17.540595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.051 BaseBdev3 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.051 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.052 [ 00:11:18.052 { 00:11:18.052 "name": "BaseBdev3", 00:11:18.052 "aliases": [ 00:11:18.052 "45830d72-06df-4ed4-9f81-d46c815489fa" 00:11:18.052 ], 00:11:18.052 "product_name": "Malloc disk", 00:11:18.052 "block_size": 512, 00:11:18.052 "num_blocks": 65536, 00:11:18.052 "uuid": "45830d72-06df-4ed4-9f81-d46c815489fa", 00:11:18.052 "assigned_rate_limits": { 00:11:18.052 "rw_ios_per_sec": 0, 00:11:18.052 "rw_mbytes_per_sec": 0, 00:11:18.052 "r_mbytes_per_sec": 0, 00:11:18.052 "w_mbytes_per_sec": 0 00:11:18.052 }, 00:11:18.052 "claimed": true, 00:11:18.052 "claim_type": "exclusive_write", 00:11:18.052 "zoned": false, 00:11:18.052 "supported_io_types": { 
00:11:18.052 "read": true, 00:11:18.052 "write": true, 00:11:18.052 "unmap": true, 00:11:18.052 "flush": true, 00:11:18.052 "reset": true, 00:11:18.052 "nvme_admin": false, 00:11:18.052 "nvme_io": false, 00:11:18.052 "nvme_io_md": false, 00:11:18.052 "write_zeroes": true, 00:11:18.052 "zcopy": true, 00:11:18.052 "get_zone_info": false, 00:11:18.052 "zone_management": false, 00:11:18.052 "zone_append": false, 00:11:18.052 "compare": false, 00:11:18.052 "compare_and_write": false, 00:11:18.052 "abort": true, 00:11:18.052 "seek_hole": false, 00:11:18.052 "seek_data": false, 00:11:18.052 "copy": true, 00:11:18.052 "nvme_iov_md": false 00:11:18.052 }, 00:11:18.052 "memory_domains": [ 00:11:18.052 { 00:11:18.052 "dma_device_id": "system", 00:11:18.052 "dma_device_type": 1 00:11:18.052 }, 00:11:18.052 { 00:11:18.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.052 "dma_device_type": 2 00:11:18.052 } 00:11:18.052 ], 00:11:18.052 "driver_specific": {} 00:11:18.052 } 00:11:18.052 ] 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.052 "name": "Existed_Raid", 00:11:18.052 "uuid": "d3f7be8e-fe11-4ff2-b534-db0b1d872a91", 00:11:18.052 "strip_size_kb": 64, 00:11:18.052 "state": "configuring", 00:11:18.052 "raid_level": "concat", 00:11:18.052 "superblock": true, 00:11:18.052 "num_base_bdevs": 4, 00:11:18.052 "num_base_bdevs_discovered": 3, 00:11:18.052 "num_base_bdevs_operational": 4, 00:11:18.052 "base_bdevs_list": [ 00:11:18.052 { 00:11:18.052 "name": "BaseBdev1", 00:11:18.052 "uuid": "3938e4f9-fe83-4f14-8dc2-c8e7a71b3e6a", 00:11:18.052 "is_configured": true, 00:11:18.052 "data_offset": 2048, 00:11:18.052 "data_size": 63488 00:11:18.052 }, 00:11:18.052 { 00:11:18.052 "name": "BaseBdev2", 00:11:18.052 
"uuid": "e78d6a76-8b9a-4005-bc3d-643117e34786", 00:11:18.052 "is_configured": true, 00:11:18.052 "data_offset": 2048, 00:11:18.052 "data_size": 63488 00:11:18.052 }, 00:11:18.052 { 00:11:18.052 "name": "BaseBdev3", 00:11:18.052 "uuid": "45830d72-06df-4ed4-9f81-d46c815489fa", 00:11:18.052 "is_configured": true, 00:11:18.052 "data_offset": 2048, 00:11:18.052 "data_size": 63488 00:11:18.052 }, 00:11:18.052 { 00:11:18.052 "name": "BaseBdev4", 00:11:18.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.052 "is_configured": false, 00:11:18.052 "data_offset": 0, 00:11:18.052 "data_size": 0 00:11:18.052 } 00:11:18.052 ] 00:11:18.052 }' 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.052 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 12:37:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.312 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.312 12:37:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 [2024-12-14 12:37:18.010193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.312 [2024-12-14 12:37:18.010479] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:18.312 [2024-12-14 12:37:18.010496] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:18.312 [2024-12-14 12:37:18.010762] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:18.312 [2024-12-14 12:37:18.010917] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:18.312 [2024-12-14 12:37:18.010929] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:18.312 BaseBdev4 00:11:18.312 [2024-12-14 12:37:18.011081] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.312 [ 00:11:18.312 { 00:11:18.312 "name": "BaseBdev4", 00:11:18.312 "aliases": [ 00:11:18.312 "21dfd694-8466-4d93-9d2a-cfb64cbf7dd5" 00:11:18.312 ], 00:11:18.312 "product_name": "Malloc disk", 00:11:18.312 "block_size": 512, 
00:11:18.312 "num_blocks": 65536, 00:11:18.312 "uuid": "21dfd694-8466-4d93-9d2a-cfb64cbf7dd5", 00:11:18.312 "assigned_rate_limits": { 00:11:18.312 "rw_ios_per_sec": 0, 00:11:18.312 "rw_mbytes_per_sec": 0, 00:11:18.312 "r_mbytes_per_sec": 0, 00:11:18.312 "w_mbytes_per_sec": 0 00:11:18.312 }, 00:11:18.312 "claimed": true, 00:11:18.312 "claim_type": "exclusive_write", 00:11:18.312 "zoned": false, 00:11:18.312 "supported_io_types": { 00:11:18.312 "read": true, 00:11:18.312 "write": true, 00:11:18.312 "unmap": true, 00:11:18.312 "flush": true, 00:11:18.312 "reset": true, 00:11:18.312 "nvme_admin": false, 00:11:18.312 "nvme_io": false, 00:11:18.312 "nvme_io_md": false, 00:11:18.312 "write_zeroes": true, 00:11:18.312 "zcopy": true, 00:11:18.312 "get_zone_info": false, 00:11:18.312 "zone_management": false, 00:11:18.312 "zone_append": false, 00:11:18.312 "compare": false, 00:11:18.312 "compare_and_write": false, 00:11:18.312 "abort": true, 00:11:18.312 "seek_hole": false, 00:11:18.312 "seek_data": false, 00:11:18.312 "copy": true, 00:11:18.312 "nvme_iov_md": false 00:11:18.312 }, 00:11:18.312 "memory_domains": [ 00:11:18.312 { 00:11:18.312 "dma_device_id": "system", 00:11:18.312 "dma_device_type": 1 00:11:18.312 }, 00:11:18.312 { 00:11:18.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.312 "dma_device_type": 2 00:11:18.312 } 00:11:18.312 ], 00:11:18.312 "driver_specific": {} 00:11:18.312 } 00:11:18.312 ] 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.312 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.313 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.313 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.313 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.313 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.313 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.313 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.572 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.572 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.572 "name": "Existed_Raid", 00:11:18.572 "uuid": "d3f7be8e-fe11-4ff2-b534-db0b1d872a91", 00:11:18.572 "strip_size_kb": 64, 00:11:18.572 "state": "online", 00:11:18.572 "raid_level": "concat", 00:11:18.572 "superblock": true, 00:11:18.572 "num_base_bdevs": 
4, 00:11:18.572 "num_base_bdevs_discovered": 4, 00:11:18.572 "num_base_bdevs_operational": 4, 00:11:18.572 "base_bdevs_list": [ 00:11:18.572 { 00:11:18.572 "name": "BaseBdev1", 00:11:18.572 "uuid": "3938e4f9-fe83-4f14-8dc2-c8e7a71b3e6a", 00:11:18.572 "is_configured": true, 00:11:18.572 "data_offset": 2048, 00:11:18.572 "data_size": 63488 00:11:18.572 }, 00:11:18.572 { 00:11:18.572 "name": "BaseBdev2", 00:11:18.572 "uuid": "e78d6a76-8b9a-4005-bc3d-643117e34786", 00:11:18.572 "is_configured": true, 00:11:18.572 "data_offset": 2048, 00:11:18.572 "data_size": 63488 00:11:18.572 }, 00:11:18.572 { 00:11:18.572 "name": "BaseBdev3", 00:11:18.572 "uuid": "45830d72-06df-4ed4-9f81-d46c815489fa", 00:11:18.572 "is_configured": true, 00:11:18.572 "data_offset": 2048, 00:11:18.572 "data_size": 63488 00:11:18.572 }, 00:11:18.572 { 00:11:18.572 "name": "BaseBdev4", 00:11:18.572 "uuid": "21dfd694-8466-4d93-9d2a-cfb64cbf7dd5", 00:11:18.572 "is_configured": true, 00:11:18.572 "data_offset": 2048, 00:11:18.572 "data_size": 63488 00:11:18.572 } 00:11:18.572 ] 00:11:18.572 }' 00:11:18.572 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.572 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.832 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:18.832 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:18.832 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:18.833 
12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.833 [2024-12-14 12:37:18.461807] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:18.833 "name": "Existed_Raid", 00:11:18.833 "aliases": [ 00:11:18.833 "d3f7be8e-fe11-4ff2-b534-db0b1d872a91" 00:11:18.833 ], 00:11:18.833 "product_name": "Raid Volume", 00:11:18.833 "block_size": 512, 00:11:18.833 "num_blocks": 253952, 00:11:18.833 "uuid": "d3f7be8e-fe11-4ff2-b534-db0b1d872a91", 00:11:18.833 "assigned_rate_limits": { 00:11:18.833 "rw_ios_per_sec": 0, 00:11:18.833 "rw_mbytes_per_sec": 0, 00:11:18.833 "r_mbytes_per_sec": 0, 00:11:18.833 "w_mbytes_per_sec": 0 00:11:18.833 }, 00:11:18.833 "claimed": false, 00:11:18.833 "zoned": false, 00:11:18.833 "supported_io_types": { 00:11:18.833 "read": true, 00:11:18.833 "write": true, 00:11:18.833 "unmap": true, 00:11:18.833 "flush": true, 00:11:18.833 "reset": true, 00:11:18.833 "nvme_admin": false, 00:11:18.833 "nvme_io": false, 00:11:18.833 "nvme_io_md": false, 00:11:18.833 "write_zeroes": true, 00:11:18.833 "zcopy": false, 00:11:18.833 "get_zone_info": false, 00:11:18.833 "zone_management": false, 00:11:18.833 "zone_append": false, 00:11:18.833 "compare": false, 00:11:18.833 "compare_and_write": false, 00:11:18.833 "abort": false, 00:11:18.833 "seek_hole": false, 00:11:18.833 "seek_data": false, 00:11:18.833 "copy": false, 00:11:18.833 
"nvme_iov_md": false 00:11:18.833 }, 00:11:18.833 "memory_domains": [ 00:11:18.833 { 00:11:18.833 "dma_device_id": "system", 00:11:18.833 "dma_device_type": 1 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.833 "dma_device_type": 2 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "dma_device_id": "system", 00:11:18.833 "dma_device_type": 1 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.833 "dma_device_type": 2 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "dma_device_id": "system", 00:11:18.833 "dma_device_type": 1 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.833 "dma_device_type": 2 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "dma_device_id": "system", 00:11:18.833 "dma_device_type": 1 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.833 "dma_device_type": 2 00:11:18.833 } 00:11:18.833 ], 00:11:18.833 "driver_specific": { 00:11:18.833 "raid": { 00:11:18.833 "uuid": "d3f7be8e-fe11-4ff2-b534-db0b1d872a91", 00:11:18.833 "strip_size_kb": 64, 00:11:18.833 "state": "online", 00:11:18.833 "raid_level": "concat", 00:11:18.833 "superblock": true, 00:11:18.833 "num_base_bdevs": 4, 00:11:18.833 "num_base_bdevs_discovered": 4, 00:11:18.833 "num_base_bdevs_operational": 4, 00:11:18.833 "base_bdevs_list": [ 00:11:18.833 { 00:11:18.833 "name": "BaseBdev1", 00:11:18.833 "uuid": "3938e4f9-fe83-4f14-8dc2-c8e7a71b3e6a", 00:11:18.833 "is_configured": true, 00:11:18.833 "data_offset": 2048, 00:11:18.833 "data_size": 63488 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "name": "BaseBdev2", 00:11:18.833 "uuid": "e78d6a76-8b9a-4005-bc3d-643117e34786", 00:11:18.833 "is_configured": true, 00:11:18.833 "data_offset": 2048, 00:11:18.833 "data_size": 63488 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "name": "BaseBdev3", 00:11:18.833 "uuid": "45830d72-06df-4ed4-9f81-d46c815489fa", 00:11:18.833 "is_configured": true, 
00:11:18.833 "data_offset": 2048, 00:11:18.833 "data_size": 63488 00:11:18.833 }, 00:11:18.833 { 00:11:18.833 "name": "BaseBdev4", 00:11:18.833 "uuid": "21dfd694-8466-4d93-9d2a-cfb64cbf7dd5", 00:11:18.833 "is_configured": true, 00:11:18.833 "data_offset": 2048, 00:11:18.833 "data_size": 63488 00:11:18.833 } 00:11:18.833 ] 00:11:18.833 } 00:11:18.833 } 00:11:18.833 }' 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:18.833 BaseBdev2 00:11:18.833 BaseBdev3 00:11:18.833 BaseBdev4' 00:11:18.833 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.093 12:37:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.093 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.093 [2024-12-14 12:37:18.753047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.093 [2024-12-14 12:37:18.753104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.093 [2024-12-14 12:37:18.753155] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.354 "name": "Existed_Raid", 00:11:19.354 "uuid": "d3f7be8e-fe11-4ff2-b534-db0b1d872a91", 00:11:19.354 "strip_size_kb": 64, 00:11:19.354 "state": "offline", 00:11:19.354 "raid_level": "concat", 00:11:19.354 "superblock": true, 00:11:19.354 "num_base_bdevs": 4, 00:11:19.354 "num_base_bdevs_discovered": 3, 00:11:19.354 "num_base_bdevs_operational": 3, 00:11:19.354 "base_bdevs_list": [ 00:11:19.354 { 00:11:19.354 "name": null, 00:11:19.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.354 "is_configured": false, 00:11:19.354 "data_offset": 0, 00:11:19.354 "data_size": 63488 00:11:19.354 }, 00:11:19.354 { 00:11:19.354 "name": "BaseBdev2", 00:11:19.354 "uuid": "e78d6a76-8b9a-4005-bc3d-643117e34786", 00:11:19.354 "is_configured": true, 00:11:19.354 "data_offset": 2048, 00:11:19.354 "data_size": 63488 00:11:19.354 }, 00:11:19.354 { 00:11:19.354 "name": "BaseBdev3", 00:11:19.354 "uuid": "45830d72-06df-4ed4-9f81-d46c815489fa", 00:11:19.354 "is_configured": true, 00:11:19.354 "data_offset": 2048, 00:11:19.354 "data_size": 63488 00:11:19.354 }, 00:11:19.354 { 00:11:19.354 "name": "BaseBdev4", 00:11:19.354 "uuid": "21dfd694-8466-4d93-9d2a-cfb64cbf7dd5", 00:11:19.354 "is_configured": true, 00:11:19.354 "data_offset": 2048, 00:11:19.354 "data_size": 63488 00:11:19.354 } 00:11:19.354 ] 00:11:19.354 }' 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.354 12:37:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.614 
12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.614 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.614 [2024-12-14 12:37:19.312923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:19.873 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.873 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.874 [2024-12-14 12:37:19.466060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.874 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.133 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:20.133 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:20.134 12:37:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.134 [2024-12-14 12:37:19.618373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:20.134 [2024-12-14 12:37:19.618428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.134 BaseBdev2 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.134 [ 00:11:20.134 { 00:11:20.134 "name": "BaseBdev2", 00:11:20.134 "aliases": [ 00:11:20.134 
"313f4391-5ce4-488b-bfb9-2c8589cf27a5" 00:11:20.134 ], 00:11:20.134 "product_name": "Malloc disk", 00:11:20.134 "block_size": 512, 00:11:20.134 "num_blocks": 65536, 00:11:20.134 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:20.134 "assigned_rate_limits": { 00:11:20.134 "rw_ios_per_sec": 0, 00:11:20.134 "rw_mbytes_per_sec": 0, 00:11:20.134 "r_mbytes_per_sec": 0, 00:11:20.134 "w_mbytes_per_sec": 0 00:11:20.134 }, 00:11:20.134 "claimed": false, 00:11:20.134 "zoned": false, 00:11:20.134 "supported_io_types": { 00:11:20.134 "read": true, 00:11:20.134 "write": true, 00:11:20.134 "unmap": true, 00:11:20.134 "flush": true, 00:11:20.134 "reset": true, 00:11:20.134 "nvme_admin": false, 00:11:20.134 "nvme_io": false, 00:11:20.134 "nvme_io_md": false, 00:11:20.134 "write_zeroes": true, 00:11:20.134 "zcopy": true, 00:11:20.134 "get_zone_info": false, 00:11:20.134 "zone_management": false, 00:11:20.134 "zone_append": false, 00:11:20.134 "compare": false, 00:11:20.134 "compare_and_write": false, 00:11:20.134 "abort": true, 00:11:20.134 "seek_hole": false, 00:11:20.134 "seek_data": false, 00:11:20.134 "copy": true, 00:11:20.134 "nvme_iov_md": false 00:11:20.134 }, 00:11:20.134 "memory_domains": [ 00:11:20.134 { 00:11:20.134 "dma_device_id": "system", 00:11:20.134 "dma_device_type": 1 00:11:20.134 }, 00:11:20.134 { 00:11:20.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.134 "dma_device_type": 2 00:11:20.134 } 00:11:20.134 ], 00:11:20.134 "driver_specific": {} 00:11:20.134 } 00:11:20.134 ] 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.134 12:37:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.134 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.393 BaseBdev3 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.393 [ 00:11:20.393 { 
00:11:20.393 "name": "BaseBdev3", 00:11:20.393 "aliases": [ 00:11:20.393 "55c2274e-4d59-4ad6-8d6b-a9e5397db34d" 00:11:20.393 ], 00:11:20.393 "product_name": "Malloc disk", 00:11:20.393 "block_size": 512, 00:11:20.393 "num_blocks": 65536, 00:11:20.393 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:20.393 "assigned_rate_limits": { 00:11:20.393 "rw_ios_per_sec": 0, 00:11:20.393 "rw_mbytes_per_sec": 0, 00:11:20.393 "r_mbytes_per_sec": 0, 00:11:20.393 "w_mbytes_per_sec": 0 00:11:20.393 }, 00:11:20.393 "claimed": false, 00:11:20.393 "zoned": false, 00:11:20.393 "supported_io_types": { 00:11:20.393 "read": true, 00:11:20.393 "write": true, 00:11:20.393 "unmap": true, 00:11:20.393 "flush": true, 00:11:20.393 "reset": true, 00:11:20.393 "nvme_admin": false, 00:11:20.393 "nvme_io": false, 00:11:20.393 "nvme_io_md": false, 00:11:20.393 "write_zeroes": true, 00:11:20.393 "zcopy": true, 00:11:20.393 "get_zone_info": false, 00:11:20.393 "zone_management": false, 00:11:20.393 "zone_append": false, 00:11:20.393 "compare": false, 00:11:20.393 "compare_and_write": false, 00:11:20.393 "abort": true, 00:11:20.393 "seek_hole": false, 00:11:20.393 "seek_data": false, 00:11:20.393 "copy": true, 00:11:20.393 "nvme_iov_md": false 00:11:20.393 }, 00:11:20.393 "memory_domains": [ 00:11:20.393 { 00:11:20.393 "dma_device_id": "system", 00:11:20.393 "dma_device_type": 1 00:11:20.393 }, 00:11:20.393 { 00:11:20.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.393 "dma_device_type": 2 00:11:20.393 } 00:11:20.393 ], 00:11:20.393 "driver_specific": {} 00:11:20.393 } 00:11:20.393 ] 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.393 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.394 BaseBdev4 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.394 12:37:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:20.394 [ 00:11:20.394 { 00:11:20.394 "name": "BaseBdev4", 00:11:20.394 "aliases": [ 00:11:20.394 "19c7877b-7fbd-4b48-acc1-06a8679be9c4" 00:11:20.394 ], 00:11:20.394 "product_name": "Malloc disk", 00:11:20.394 "block_size": 512, 00:11:20.394 "num_blocks": 65536, 00:11:20.394 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:20.394 "assigned_rate_limits": { 00:11:20.394 "rw_ios_per_sec": 0, 00:11:20.394 "rw_mbytes_per_sec": 0, 00:11:20.394 "r_mbytes_per_sec": 0, 00:11:20.394 "w_mbytes_per_sec": 0 00:11:20.394 }, 00:11:20.394 "claimed": false, 00:11:20.394 "zoned": false, 00:11:20.394 "supported_io_types": { 00:11:20.394 "read": true, 00:11:20.394 "write": true, 00:11:20.394 "unmap": true, 00:11:20.394 "flush": true, 00:11:20.394 "reset": true, 00:11:20.394 "nvme_admin": false, 00:11:20.394 "nvme_io": false, 00:11:20.394 "nvme_io_md": false, 00:11:20.394 "write_zeroes": true, 00:11:20.394 "zcopy": true, 00:11:20.394 "get_zone_info": false, 00:11:20.394 "zone_management": false, 00:11:20.394 "zone_append": false, 00:11:20.394 "compare": false, 00:11:20.394 "compare_and_write": false, 00:11:20.394 "abort": true, 00:11:20.394 "seek_hole": false, 00:11:20.394 "seek_data": false, 00:11:20.394 "copy": true, 00:11:20.394 "nvme_iov_md": false 00:11:20.394 }, 00:11:20.394 "memory_domains": [ 00:11:20.394 { 00:11:20.394 "dma_device_id": "system", 00:11:20.394 "dma_device_type": 1 00:11:20.394 }, 00:11:20.394 { 00:11:20.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.394 "dma_device_type": 2 00:11:20.394 } 00:11:20.394 ], 00:11:20.394 "driver_specific": {} 00:11:20.394 } 00:11:20.394 ] 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:20.394 12:37:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.394 [2024-12-14 12:37:20.012494] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:20.394 [2024-12-14 12:37:20.012589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:20.394 [2024-12-14 12:37:20.012646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.394 [2024-12-14 12:37:20.014671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.394 [2024-12-14 12:37:20.014774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.394 "name": "Existed_Raid", 00:11:20.394 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:20.394 "strip_size_kb": 64, 00:11:20.394 "state": "configuring", 00:11:20.394 "raid_level": "concat", 00:11:20.394 "superblock": true, 00:11:20.394 "num_base_bdevs": 4, 00:11:20.394 "num_base_bdevs_discovered": 3, 00:11:20.394 "num_base_bdevs_operational": 4, 00:11:20.394 "base_bdevs_list": [ 00:11:20.394 { 00:11:20.394 "name": "BaseBdev1", 00:11:20.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.394 "is_configured": false, 00:11:20.394 "data_offset": 0, 00:11:20.394 "data_size": 0 00:11:20.394 }, 00:11:20.394 { 00:11:20.394 "name": "BaseBdev2", 00:11:20.394 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:20.394 "is_configured": true, 00:11:20.394 "data_offset": 2048, 00:11:20.394 "data_size": 63488 
00:11:20.394 }, 00:11:20.394 { 00:11:20.394 "name": "BaseBdev3", 00:11:20.394 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:20.394 "is_configured": true, 00:11:20.394 "data_offset": 2048, 00:11:20.394 "data_size": 63488 00:11:20.394 }, 00:11:20.394 { 00:11:20.394 "name": "BaseBdev4", 00:11:20.394 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:20.394 "is_configured": true, 00:11:20.394 "data_offset": 2048, 00:11:20.394 "data_size": 63488 00:11:20.394 } 00:11:20.394 ] 00:11:20.394 }' 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.394 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.964 [2024-12-14 12:37:20.467681] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.964 "name": "Existed_Raid", 00:11:20.964 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:20.964 "strip_size_kb": 64, 00:11:20.964 "state": "configuring", 00:11:20.964 "raid_level": "concat", 00:11:20.964 "superblock": true, 00:11:20.964 "num_base_bdevs": 4, 00:11:20.964 "num_base_bdevs_discovered": 2, 00:11:20.964 "num_base_bdevs_operational": 4, 00:11:20.964 "base_bdevs_list": [ 00:11:20.964 { 00:11:20.964 "name": "BaseBdev1", 00:11:20.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.964 "is_configured": false, 00:11:20.964 "data_offset": 0, 00:11:20.964 "data_size": 0 00:11:20.964 }, 00:11:20.964 { 00:11:20.964 "name": null, 00:11:20.964 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:20.964 "is_configured": false, 00:11:20.964 "data_offset": 0, 00:11:20.964 "data_size": 63488 
00:11:20.964 }, 00:11:20.964 { 00:11:20.964 "name": "BaseBdev3", 00:11:20.964 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:20.964 "is_configured": true, 00:11:20.964 "data_offset": 2048, 00:11:20.964 "data_size": 63488 00:11:20.964 }, 00:11:20.964 { 00:11:20.964 "name": "BaseBdev4", 00:11:20.964 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:20.964 "is_configured": true, 00:11:20.964 "data_offset": 2048, 00:11:20.964 "data_size": 63488 00:11:20.964 } 00:11:20.964 ] 00:11:20.964 }' 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.964 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.223 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.488 [2024-12-14 12:37:20.960436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:21.488 BaseBdev1 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.488 [ 00:11:21.488 { 00:11:21.488 "name": "BaseBdev1", 00:11:21.488 "aliases": [ 00:11:21.488 "54ce8ab4-3d70-4e88-887b-c488635b2386" 00:11:21.488 ], 00:11:21.488 "product_name": "Malloc disk", 00:11:21.488 "block_size": 512, 00:11:21.488 "num_blocks": 65536, 00:11:21.488 "uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:21.488 "assigned_rate_limits": { 00:11:21.488 "rw_ios_per_sec": 0, 00:11:21.488 "rw_mbytes_per_sec": 0, 
00:11:21.488 "r_mbytes_per_sec": 0, 00:11:21.488 "w_mbytes_per_sec": 0 00:11:21.488 }, 00:11:21.488 "claimed": true, 00:11:21.488 "claim_type": "exclusive_write", 00:11:21.488 "zoned": false, 00:11:21.488 "supported_io_types": { 00:11:21.488 "read": true, 00:11:21.488 "write": true, 00:11:21.488 "unmap": true, 00:11:21.488 "flush": true, 00:11:21.488 "reset": true, 00:11:21.488 "nvme_admin": false, 00:11:21.488 "nvme_io": false, 00:11:21.488 "nvme_io_md": false, 00:11:21.488 "write_zeroes": true, 00:11:21.488 "zcopy": true, 00:11:21.488 "get_zone_info": false, 00:11:21.488 "zone_management": false, 00:11:21.488 "zone_append": false, 00:11:21.488 "compare": false, 00:11:21.488 "compare_and_write": false, 00:11:21.488 "abort": true, 00:11:21.488 "seek_hole": false, 00:11:21.488 "seek_data": false, 00:11:21.488 "copy": true, 00:11:21.488 "nvme_iov_md": false 00:11:21.488 }, 00:11:21.488 "memory_domains": [ 00:11:21.488 { 00:11:21.488 "dma_device_id": "system", 00:11:21.488 "dma_device_type": 1 00:11:21.488 }, 00:11:21.488 { 00:11:21.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.488 "dma_device_type": 2 00:11:21.488 } 00:11:21.488 ], 00:11:21.488 "driver_specific": {} 00:11:21.488 } 00:11:21.488 ] 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.488 12:37:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.488 12:37:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.488 "name": "Existed_Raid", 00:11:21.488 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:21.488 "strip_size_kb": 64, 00:11:21.488 "state": "configuring", 00:11:21.488 "raid_level": "concat", 00:11:21.488 "superblock": true, 00:11:21.488 "num_base_bdevs": 4, 00:11:21.488 "num_base_bdevs_discovered": 3, 00:11:21.488 "num_base_bdevs_operational": 4, 00:11:21.488 "base_bdevs_list": [ 00:11:21.488 { 00:11:21.488 "name": "BaseBdev1", 00:11:21.488 "uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:21.488 "is_configured": true, 00:11:21.488 "data_offset": 2048, 00:11:21.488 "data_size": 63488 00:11:21.488 }, 00:11:21.488 { 
00:11:21.488 "name": null, 00:11:21.488 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:21.488 "is_configured": false, 00:11:21.488 "data_offset": 0, 00:11:21.488 "data_size": 63488 00:11:21.488 }, 00:11:21.488 { 00:11:21.488 "name": "BaseBdev3", 00:11:21.488 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:21.488 "is_configured": true, 00:11:21.488 "data_offset": 2048, 00:11:21.488 "data_size": 63488 00:11:21.488 }, 00:11:21.488 { 00:11:21.488 "name": "BaseBdev4", 00:11:21.488 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:21.488 "is_configured": true, 00:11:21.488 "data_offset": 2048, 00:11:21.488 "data_size": 63488 00:11:21.488 } 00:11:21.488 ] 00:11:21.488 }' 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.488 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.754 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.754 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.754 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.754 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.754 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.754 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:21.754 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:21.755 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.755 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.014 [2024-12-14 12:37:21.491611] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.014 12:37:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.014 "name": "Existed_Raid", 00:11:22.014 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:22.014 "strip_size_kb": 64, 00:11:22.014 "state": "configuring", 00:11:22.014 "raid_level": "concat", 00:11:22.014 "superblock": true, 00:11:22.014 "num_base_bdevs": 4, 00:11:22.014 "num_base_bdevs_discovered": 2, 00:11:22.014 "num_base_bdevs_operational": 4, 00:11:22.014 "base_bdevs_list": [ 00:11:22.014 { 00:11:22.014 "name": "BaseBdev1", 00:11:22.014 "uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:22.014 "is_configured": true, 00:11:22.014 "data_offset": 2048, 00:11:22.014 "data_size": 63488 00:11:22.014 }, 00:11:22.014 { 00:11:22.014 "name": null, 00:11:22.014 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:22.014 "is_configured": false, 00:11:22.014 "data_offset": 0, 00:11:22.014 "data_size": 63488 00:11:22.014 }, 00:11:22.014 { 00:11:22.014 "name": null, 00:11:22.014 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:22.014 "is_configured": false, 00:11:22.014 "data_offset": 0, 00:11:22.014 "data_size": 63488 00:11:22.014 }, 00:11:22.014 { 00:11:22.014 "name": "BaseBdev4", 00:11:22.014 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:22.014 "is_configured": true, 00:11:22.014 "data_offset": 2048, 00:11:22.014 "data_size": 63488 00:11:22.014 } 00:11:22.014 ] 00:11:22.014 }' 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.014 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.273 
12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.273 [2024-12-14 12:37:21.962783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.273 12:37:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.273 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.273 "name": "Existed_Raid", 00:11:22.273 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:22.273 "strip_size_kb": 64, 00:11:22.273 "state": "configuring", 00:11:22.273 "raid_level": "concat", 00:11:22.273 "superblock": true, 00:11:22.273 "num_base_bdevs": 4, 00:11:22.273 "num_base_bdevs_discovered": 3, 00:11:22.273 "num_base_bdevs_operational": 4, 00:11:22.273 "base_bdevs_list": [ 00:11:22.273 { 00:11:22.273 "name": "BaseBdev1", 00:11:22.273 "uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:22.273 "is_configured": true, 00:11:22.273 "data_offset": 2048, 00:11:22.273 "data_size": 63488 00:11:22.273 }, 00:11:22.273 { 00:11:22.273 "name": null, 00:11:22.273 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:22.273 "is_configured": false, 00:11:22.273 "data_offset": 0, 00:11:22.273 "data_size": 63488 00:11:22.273 }, 00:11:22.273 { 00:11:22.273 "name": "BaseBdev3", 00:11:22.273 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:22.273 "is_configured": true, 00:11:22.273 "data_offset": 2048, 00:11:22.273 "data_size": 63488 00:11:22.273 }, 00:11:22.273 { 00:11:22.273 "name": "BaseBdev4", 00:11:22.273 "uuid": 
"19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:22.273 "is_configured": true, 00:11:22.273 "data_offset": 2048, 00:11:22.273 "data_size": 63488 00:11:22.273 } 00:11:22.273 ] 00:11:22.273 }' 00:11:22.273 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.273 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.843 [2024-12-14 12:37:22.374280] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.843 "name": "Existed_Raid", 00:11:22.843 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:22.843 "strip_size_kb": 64, 00:11:22.843 "state": "configuring", 00:11:22.843 "raid_level": "concat", 00:11:22.843 "superblock": true, 00:11:22.843 "num_base_bdevs": 4, 00:11:22.843 "num_base_bdevs_discovered": 2, 00:11:22.843 "num_base_bdevs_operational": 4, 00:11:22.843 "base_bdevs_list": [ 00:11:22.843 { 00:11:22.843 "name": null, 00:11:22.843 
"uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:22.843 "is_configured": false, 00:11:22.843 "data_offset": 0, 00:11:22.843 "data_size": 63488 00:11:22.843 }, 00:11:22.843 { 00:11:22.843 "name": null, 00:11:22.843 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:22.843 "is_configured": false, 00:11:22.843 "data_offset": 0, 00:11:22.843 "data_size": 63488 00:11:22.843 }, 00:11:22.843 { 00:11:22.843 "name": "BaseBdev3", 00:11:22.843 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:22.843 "is_configured": true, 00:11:22.843 "data_offset": 2048, 00:11:22.843 "data_size": 63488 00:11:22.843 }, 00:11:22.843 { 00:11:22.843 "name": "BaseBdev4", 00:11:22.843 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:22.843 "is_configured": true, 00:11:22.843 "data_offset": 2048, 00:11:22.843 "data_size": 63488 00:11:22.843 } 00:11:22.843 ] 00:11:22.843 }' 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.843 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.412 [2024-12-14 12:37:22.937550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.412 12:37:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.412 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.412 "name": "Existed_Raid", 00:11:23.412 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:23.412 "strip_size_kb": 64, 00:11:23.412 "state": "configuring", 00:11:23.412 "raid_level": "concat", 00:11:23.412 "superblock": true, 00:11:23.412 "num_base_bdevs": 4, 00:11:23.412 "num_base_bdevs_discovered": 3, 00:11:23.412 "num_base_bdevs_operational": 4, 00:11:23.412 "base_bdevs_list": [ 00:11:23.412 { 00:11:23.412 "name": null, 00:11:23.412 "uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:23.413 "is_configured": false, 00:11:23.413 "data_offset": 0, 00:11:23.413 "data_size": 63488 00:11:23.413 }, 00:11:23.413 { 00:11:23.413 "name": "BaseBdev2", 00:11:23.413 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:23.413 "is_configured": true, 00:11:23.413 "data_offset": 2048, 00:11:23.413 "data_size": 63488 00:11:23.413 }, 00:11:23.413 { 00:11:23.413 "name": "BaseBdev3", 00:11:23.413 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:23.413 "is_configured": true, 00:11:23.413 "data_offset": 2048, 00:11:23.413 "data_size": 63488 00:11:23.413 }, 00:11:23.413 { 00:11:23.413 "name": "BaseBdev4", 00:11:23.413 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:23.413 "is_configured": true, 00:11:23.413 "data_offset": 2048, 00:11:23.413 "data_size": 63488 00:11:23.413 } 00:11:23.413 ] 00:11:23.413 }' 00:11:23.413 12:37:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.413 12:37:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.672 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.672 12:37:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.672 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.672 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:23.672 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.672 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:23.672 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.672 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.672 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.673 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 54ce8ab4-3d70-4e88-887b-c488635b2386 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.933 [2024-12-14 12:37:23.488556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:23.933 [2024-12-14 12:37:23.488786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:23.933 [2024-12-14 12:37:23.488800] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:23.933 [2024-12-14 12:37:23.489048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:23.933 NewBaseBdev 00:11:23.933 [2024-12-14 12:37:23.489231] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:23.933 [2024-12-14 12:37:23.489248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:23.933 [2024-12-14 12:37:23.489392] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.933 12:37:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.933 [ 00:11:23.933 { 00:11:23.933 "name": "NewBaseBdev", 00:11:23.933 "aliases": [ 00:11:23.933 "54ce8ab4-3d70-4e88-887b-c488635b2386" 00:11:23.933 ], 00:11:23.933 "product_name": "Malloc disk", 00:11:23.933 "block_size": 512, 00:11:23.933 "num_blocks": 65536, 00:11:23.933 "uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:23.933 "assigned_rate_limits": { 00:11:23.933 "rw_ios_per_sec": 0, 00:11:23.933 "rw_mbytes_per_sec": 0, 00:11:23.933 "r_mbytes_per_sec": 0, 00:11:23.933 "w_mbytes_per_sec": 0 00:11:23.933 }, 00:11:23.933 "claimed": true, 00:11:23.933 "claim_type": "exclusive_write", 00:11:23.933 "zoned": false, 00:11:23.933 "supported_io_types": { 00:11:23.933 "read": true, 00:11:23.933 "write": true, 00:11:23.933 "unmap": true, 00:11:23.933 "flush": true, 00:11:23.933 "reset": true, 00:11:23.933 "nvme_admin": false, 00:11:23.933 "nvme_io": false, 00:11:23.933 "nvme_io_md": false, 00:11:23.933 "write_zeroes": true, 00:11:23.933 "zcopy": true, 00:11:23.933 "get_zone_info": false, 00:11:23.933 "zone_management": false, 00:11:23.933 "zone_append": false, 00:11:23.933 "compare": false, 00:11:23.933 "compare_and_write": false, 00:11:23.933 "abort": true, 00:11:23.933 "seek_hole": false, 00:11:23.933 "seek_data": false, 00:11:23.933 "copy": true, 00:11:23.933 "nvme_iov_md": false 00:11:23.933 }, 00:11:23.933 "memory_domains": [ 00:11:23.933 { 00:11:23.933 "dma_device_id": "system", 00:11:23.933 "dma_device_type": 1 00:11:23.933 }, 00:11:23.933 { 00:11:23.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.933 "dma_device_type": 2 00:11:23.933 } 00:11:23.933 ], 00:11:23.933 "driver_specific": {} 00:11:23.933 } 00:11:23.933 ] 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.933 12:37:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.933 "name": "Existed_Raid", 00:11:23.933 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:23.933 "strip_size_kb": 64, 00:11:23.933 
"state": "online", 00:11:23.933 "raid_level": "concat", 00:11:23.933 "superblock": true, 00:11:23.933 "num_base_bdevs": 4, 00:11:23.933 "num_base_bdevs_discovered": 4, 00:11:23.933 "num_base_bdevs_operational": 4, 00:11:23.933 "base_bdevs_list": [ 00:11:23.933 { 00:11:23.933 "name": "NewBaseBdev", 00:11:23.933 "uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:23.933 "is_configured": true, 00:11:23.933 "data_offset": 2048, 00:11:23.933 "data_size": 63488 00:11:23.933 }, 00:11:23.933 { 00:11:23.933 "name": "BaseBdev2", 00:11:23.933 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:23.933 "is_configured": true, 00:11:23.933 "data_offset": 2048, 00:11:23.933 "data_size": 63488 00:11:23.933 }, 00:11:23.933 { 00:11:23.933 "name": "BaseBdev3", 00:11:23.933 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:23.933 "is_configured": true, 00:11:23.933 "data_offset": 2048, 00:11:23.933 "data_size": 63488 00:11:23.933 }, 00:11:23.933 { 00:11:23.933 "name": "BaseBdev4", 00:11:23.933 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:23.933 "is_configured": true, 00:11:23.933 "data_offset": 2048, 00:11:23.933 "data_size": 63488 00:11:23.933 } 00:11:23.933 ] 00:11:23.933 }' 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.933 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.193 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.193 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.193 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.193 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.193 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.193 
12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.193 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.453 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.453 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.453 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.453 [2024-12-14 12:37:23.936236] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.453 12:37:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.453 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.453 "name": "Existed_Raid", 00:11:24.453 "aliases": [ 00:11:24.453 "783a220f-97fb-4dab-ae1d-b6640782f01f" 00:11:24.453 ], 00:11:24.453 "product_name": "Raid Volume", 00:11:24.453 "block_size": 512, 00:11:24.453 "num_blocks": 253952, 00:11:24.453 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:24.453 "assigned_rate_limits": { 00:11:24.453 "rw_ios_per_sec": 0, 00:11:24.453 "rw_mbytes_per_sec": 0, 00:11:24.453 "r_mbytes_per_sec": 0, 00:11:24.453 "w_mbytes_per_sec": 0 00:11:24.453 }, 00:11:24.453 "claimed": false, 00:11:24.453 "zoned": false, 00:11:24.453 "supported_io_types": { 00:11:24.453 "read": true, 00:11:24.453 "write": true, 00:11:24.453 "unmap": true, 00:11:24.453 "flush": true, 00:11:24.453 "reset": true, 00:11:24.453 "nvme_admin": false, 00:11:24.453 "nvme_io": false, 00:11:24.453 "nvme_io_md": false, 00:11:24.453 "write_zeroes": true, 00:11:24.453 "zcopy": false, 00:11:24.453 "get_zone_info": false, 00:11:24.453 "zone_management": false, 00:11:24.453 "zone_append": false, 00:11:24.453 "compare": false, 00:11:24.453 "compare_and_write": false, 00:11:24.453 "abort": 
false, 00:11:24.453 "seek_hole": false, 00:11:24.453 "seek_data": false, 00:11:24.453 "copy": false, 00:11:24.453 "nvme_iov_md": false 00:11:24.453 }, 00:11:24.453 "memory_domains": [ 00:11:24.453 { 00:11:24.453 "dma_device_id": "system", 00:11:24.453 "dma_device_type": 1 00:11:24.453 }, 00:11:24.453 { 00:11:24.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.453 "dma_device_type": 2 00:11:24.453 }, 00:11:24.453 { 00:11:24.453 "dma_device_id": "system", 00:11:24.453 "dma_device_type": 1 00:11:24.453 }, 00:11:24.453 { 00:11:24.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.454 "dma_device_type": 2 00:11:24.454 }, 00:11:24.454 { 00:11:24.454 "dma_device_id": "system", 00:11:24.454 "dma_device_type": 1 00:11:24.454 }, 00:11:24.454 { 00:11:24.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.454 "dma_device_type": 2 00:11:24.454 }, 00:11:24.454 { 00:11:24.454 "dma_device_id": "system", 00:11:24.454 "dma_device_type": 1 00:11:24.454 }, 00:11:24.454 { 00:11:24.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.454 "dma_device_type": 2 00:11:24.454 } 00:11:24.454 ], 00:11:24.454 "driver_specific": { 00:11:24.454 "raid": { 00:11:24.454 "uuid": "783a220f-97fb-4dab-ae1d-b6640782f01f", 00:11:24.454 "strip_size_kb": 64, 00:11:24.454 "state": "online", 00:11:24.454 "raid_level": "concat", 00:11:24.454 "superblock": true, 00:11:24.454 "num_base_bdevs": 4, 00:11:24.454 "num_base_bdevs_discovered": 4, 00:11:24.454 "num_base_bdevs_operational": 4, 00:11:24.454 "base_bdevs_list": [ 00:11:24.454 { 00:11:24.454 "name": "NewBaseBdev", 00:11:24.454 "uuid": "54ce8ab4-3d70-4e88-887b-c488635b2386", 00:11:24.454 "is_configured": true, 00:11:24.454 "data_offset": 2048, 00:11:24.454 "data_size": 63488 00:11:24.454 }, 00:11:24.454 { 00:11:24.454 "name": "BaseBdev2", 00:11:24.454 "uuid": "313f4391-5ce4-488b-bfb9-2c8589cf27a5", 00:11:24.454 "is_configured": true, 00:11:24.454 "data_offset": 2048, 00:11:24.454 "data_size": 63488 00:11:24.454 }, 00:11:24.454 { 00:11:24.454 
"name": "BaseBdev3", 00:11:24.454 "uuid": "55c2274e-4d59-4ad6-8d6b-a9e5397db34d", 00:11:24.454 "is_configured": true, 00:11:24.454 "data_offset": 2048, 00:11:24.454 "data_size": 63488 00:11:24.454 }, 00:11:24.454 { 00:11:24.454 "name": "BaseBdev4", 00:11:24.454 "uuid": "19c7877b-7fbd-4b48-acc1-06a8679be9c4", 00:11:24.454 "is_configured": true, 00:11:24.454 "data_offset": 2048, 00:11:24.454 "data_size": 63488 00:11:24.454 } 00:11:24.454 ] 00:11:24.454 } 00:11:24.454 } 00:11:24.454 }' 00:11:24.454 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.454 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:24.454 BaseBdev2 00:11:24.454 BaseBdev3 00:11:24.454 BaseBdev4' 00:11:24.454 12:37:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.454 12:37:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.454 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.714 [2024-12-14 12:37:24.223370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.714 [2024-12-14 12:37:24.223444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:24.714 [2024-12-14 12:37:24.223539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:24.714 [2024-12-14 12:37:24.223642] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:24.714 [2024-12-14 12:37:24.223688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73733 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73733 ']' 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73733 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73733 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73733' 00:11:24.714 killing process with pid 73733 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73733 00:11:24.714 [2024-12-14 12:37:24.269915] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:24.714 12:37:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73733 00:11:24.974 [2024-12-14 12:37:24.658763] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.351 12:37:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:26.351 ************************************ 00:11:26.351 END TEST raid_state_function_test_sb 00:11:26.351 ************************************ 00:11:26.351 00:11:26.351 real 0m11.009s 00:11:26.351 user 0m17.427s 00:11:26.351 sys 
0m1.935s 00:11:26.351 12:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.351 12:37:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.351 12:37:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:26.351 12:37:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:26.351 12:37:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.351 12:37:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.351 ************************************ 00:11:26.351 START TEST raid_superblock_test 00:11:26.351 ************************************ 00:11:26.351 12:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:26.351 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:26.351 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:26.351 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74401 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74401 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74401 ']' 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:26.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:26.352 12:37:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.352 [2024-12-14 12:37:25.915958] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:26.352 [2024-12-14 12:37:25.916204] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74401 ] 00:11:26.352 [2024-12-14 12:37:26.068522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.612 [2024-12-14 12:37:26.181735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.871 [2024-12-14 12:37:26.377455] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.871 [2024-12-14 12:37:26.377602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:27.131 
12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.131 malloc1 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.131 [2024-12-14 12:37:26.790654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:27.131 [2024-12-14 12:37:26.790752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.131 [2024-12-14 12:37:26.790789] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:27.131 [2024-12-14 12:37:26.790817] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.131 [2024-12-14 12:37:26.792891] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.131 [2024-12-14 12:37:26.792976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:27.131 pt1 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.131 malloc2 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.131 [2024-12-14 12:37:26.844406] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.131 [2024-12-14 12:37:26.844498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.131 [2024-12-14 12:37:26.844551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:27.131 [2024-12-14 12:37:26.844579] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.131 [2024-12-14 12:37:26.846803] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.131 [2024-12-14 12:37:26.846874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.131 
pt2 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.131 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.392 malloc3 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.392 [2024-12-14 12:37:26.916898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:27.392 [2024-12-14 12:37:26.916953] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.392 [2024-12-14 12:37:26.916974] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:27.392 [2024-12-14 12:37:26.916983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.392 [2024-12-14 12:37:26.919154] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.392 [2024-12-14 12:37:26.919225] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:27.392 pt3 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.392 malloc4 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.392 [2024-12-14 12:37:26.973234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:27.392 [2024-12-14 12:37:26.973330] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.392 [2024-12-14 12:37:26.973369] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:27.392 [2024-12-14 12:37:26.973397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.392 [2024-12-14 12:37:26.975509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.392 [2024-12-14 12:37:26.975576] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:27.392 pt4 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.392 [2024-12-14 12:37:26.985249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:27.392 [2024-12-14 
12:37:26.987147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.392 [2024-12-14 12:37:26.987274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:27.392 [2024-12-14 12:37:26.987368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:27.392 [2024-12-14 12:37:26.987627] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:27.392 [2024-12-14 12:37:26.987673] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:27.392 [2024-12-14 12:37:26.987942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:27.392 [2024-12-14 12:37:26.988159] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:27.392 [2024-12-14 12:37:26.988208] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:27.392 [2024-12-14 12:37:26.988396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.392 12:37:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.392 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.392 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.392 "name": "raid_bdev1", 00:11:27.392 "uuid": "687acf46-6dce-428d-86da-1c9632aae8be", 00:11:27.392 "strip_size_kb": 64, 00:11:27.392 "state": "online", 00:11:27.392 "raid_level": "concat", 00:11:27.392 "superblock": true, 00:11:27.392 "num_base_bdevs": 4, 00:11:27.392 "num_base_bdevs_discovered": 4, 00:11:27.392 "num_base_bdevs_operational": 4, 00:11:27.392 "base_bdevs_list": [ 00:11:27.392 { 00:11:27.392 "name": "pt1", 00:11:27.392 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.392 "is_configured": true, 00:11:27.392 "data_offset": 2048, 00:11:27.392 "data_size": 63488 00:11:27.392 }, 00:11:27.392 { 00:11:27.392 "name": "pt2", 00:11:27.392 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.392 "is_configured": true, 00:11:27.392 "data_offset": 2048, 00:11:27.392 "data_size": 63488 00:11:27.392 }, 00:11:27.392 { 00:11:27.392 "name": "pt3", 00:11:27.392 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.392 "is_configured": true, 00:11:27.392 "data_offset": 2048, 00:11:27.392 
"data_size": 63488 00:11:27.392 }, 00:11:27.392 { 00:11:27.392 "name": "pt4", 00:11:27.392 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.392 "is_configured": true, 00:11:27.392 "data_offset": 2048, 00:11:27.392 "data_size": 63488 00:11:27.392 } 00:11:27.392 ] 00:11:27.392 }' 00:11:27.392 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.392 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.961 [2024-12-14 12:37:27.444771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:27.961 "name": "raid_bdev1", 00:11:27.961 "aliases": [ 00:11:27.961 "687acf46-6dce-428d-86da-1c9632aae8be" 
00:11:27.961 ], 00:11:27.961 "product_name": "Raid Volume", 00:11:27.961 "block_size": 512, 00:11:27.961 "num_blocks": 253952, 00:11:27.961 "uuid": "687acf46-6dce-428d-86da-1c9632aae8be", 00:11:27.961 "assigned_rate_limits": { 00:11:27.961 "rw_ios_per_sec": 0, 00:11:27.961 "rw_mbytes_per_sec": 0, 00:11:27.961 "r_mbytes_per_sec": 0, 00:11:27.961 "w_mbytes_per_sec": 0 00:11:27.961 }, 00:11:27.961 "claimed": false, 00:11:27.961 "zoned": false, 00:11:27.961 "supported_io_types": { 00:11:27.961 "read": true, 00:11:27.961 "write": true, 00:11:27.961 "unmap": true, 00:11:27.961 "flush": true, 00:11:27.961 "reset": true, 00:11:27.961 "nvme_admin": false, 00:11:27.961 "nvme_io": false, 00:11:27.961 "nvme_io_md": false, 00:11:27.961 "write_zeroes": true, 00:11:27.961 "zcopy": false, 00:11:27.961 "get_zone_info": false, 00:11:27.961 "zone_management": false, 00:11:27.961 "zone_append": false, 00:11:27.961 "compare": false, 00:11:27.961 "compare_and_write": false, 00:11:27.961 "abort": false, 00:11:27.961 "seek_hole": false, 00:11:27.961 "seek_data": false, 00:11:27.961 "copy": false, 00:11:27.961 "nvme_iov_md": false 00:11:27.961 }, 00:11:27.961 "memory_domains": [ 00:11:27.961 { 00:11:27.961 "dma_device_id": "system", 00:11:27.961 "dma_device_type": 1 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.961 "dma_device_type": 2 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "dma_device_id": "system", 00:11:27.961 "dma_device_type": 1 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.961 "dma_device_type": 2 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "dma_device_id": "system", 00:11:27.961 "dma_device_type": 1 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.961 "dma_device_type": 2 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "dma_device_id": "system", 00:11:27.961 "dma_device_type": 1 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:27.961 "dma_device_type": 2 00:11:27.961 } 00:11:27.961 ], 00:11:27.961 "driver_specific": { 00:11:27.961 "raid": { 00:11:27.961 "uuid": "687acf46-6dce-428d-86da-1c9632aae8be", 00:11:27.961 "strip_size_kb": 64, 00:11:27.961 "state": "online", 00:11:27.961 "raid_level": "concat", 00:11:27.961 "superblock": true, 00:11:27.961 "num_base_bdevs": 4, 00:11:27.961 "num_base_bdevs_discovered": 4, 00:11:27.961 "num_base_bdevs_operational": 4, 00:11:27.961 "base_bdevs_list": [ 00:11:27.961 { 00:11:27.961 "name": "pt1", 00:11:27.961 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.961 "is_configured": true, 00:11:27.961 "data_offset": 2048, 00:11:27.961 "data_size": 63488 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "name": "pt2", 00:11:27.961 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.961 "is_configured": true, 00:11:27.961 "data_offset": 2048, 00:11:27.961 "data_size": 63488 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "name": "pt3", 00:11:27.961 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.961 "is_configured": true, 00:11:27.961 "data_offset": 2048, 00:11:27.961 "data_size": 63488 00:11:27.961 }, 00:11:27.961 { 00:11:27.961 "name": "pt4", 00:11:27.961 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.961 "is_configured": true, 00:11:27.961 "data_offset": 2048, 00:11:27.961 "data_size": 63488 00:11:27.961 } 00:11:27.961 ] 00:11:27.961 } 00:11:27.961 } 00:11:27.961 }' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:27.961 pt2 00:11:27.961 pt3 00:11:27.961 pt4' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:27.961 12:37:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.961 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 [2024-12-14 12:37:27.792139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=687acf46-6dce-428d-86da-1c9632aae8be 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 687acf46-6dce-428d-86da-1c9632aae8be ']' 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 [2024-12-14 12:37:27.839751] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.220 [2024-12-14 12:37:27.839814] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.220 [2024-12-14 12:37:27.839934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.220 [2024-12-14 12:37:27.840032] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.220 [2024-12-14 12:37:27.840090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.220 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.508 12:37:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.508 [2024-12-14 12:37:27.987549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:28.508 [2024-12-14 12:37:27.989494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:28.508 [2024-12-14 12:37:27.989541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:28.508 [2024-12-14 12:37:27.989573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:28.508 [2024-12-14 12:37:27.989622] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:28.508 [2024-12-14 12:37:27.989678] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:28.508 [2024-12-14 12:37:27.989696] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:28.508 [2024-12-14 12:37:27.989714] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:28.508 [2024-12-14 12:37:27.989727] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.508 [2024-12-14 12:37:27.989738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:28.508 request: 00:11:28.508 { 00:11:28.508 "name": "raid_bdev1", 00:11:28.508 "raid_level": "concat", 00:11:28.508 "base_bdevs": [ 00:11:28.508 "malloc1", 00:11:28.508 "malloc2", 00:11:28.508 "malloc3", 00:11:28.508 "malloc4" 00:11:28.508 ], 00:11:28.508 "strip_size_kb": 64, 00:11:28.508 "superblock": false, 00:11:28.508 "method": "bdev_raid_create", 00:11:28.508 "req_id": 1 00:11:28.508 } 00:11:28.508 Got JSON-RPC error response 00:11:28.508 response: 00:11:28.508 { 00:11:28.508 "code": -17, 00:11:28.508 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:28.508 } 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.508 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.508 12:37:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:28.508 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.508 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:28.508 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.509 [2024-12-14 12:37:28.055407] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:28.509 [2024-12-14 12:37:28.055519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.509 [2024-12-14 12:37:28.055553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:28.509 [2024-12-14 12:37:28.055584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.509 [2024-12-14 12:37:28.057910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.509 [2024-12-14 12:37:28.057988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:28.509 [2024-12-14 12:37:28.058118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:28.509 [2024-12-14 12:37:28.058221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:28.509 pt1 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.509 "name": "raid_bdev1", 00:11:28.509 "uuid": "687acf46-6dce-428d-86da-1c9632aae8be", 00:11:28.509 "strip_size_kb": 64, 00:11:28.509 "state": "configuring", 00:11:28.509 "raid_level": "concat", 00:11:28.509 "superblock": true, 00:11:28.509 "num_base_bdevs": 4, 00:11:28.509 "num_base_bdevs_discovered": 1, 00:11:28.509 "num_base_bdevs_operational": 4, 00:11:28.509 "base_bdevs_list": [ 00:11:28.509 { 00:11:28.509 "name": "pt1", 00:11:28.509 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.509 "is_configured": true, 00:11:28.509 "data_offset": 2048, 00:11:28.509 "data_size": 63488 00:11:28.509 }, 00:11:28.509 { 00:11:28.509 "name": null, 00:11:28.509 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.509 "is_configured": false, 00:11:28.509 "data_offset": 2048, 00:11:28.509 "data_size": 63488 00:11:28.509 }, 00:11:28.509 { 00:11:28.509 "name": null, 00:11:28.509 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.509 "is_configured": false, 00:11:28.509 "data_offset": 2048, 00:11:28.509 "data_size": 63488 00:11:28.509 }, 00:11:28.509 { 00:11:28.509 "name": null, 00:11:28.509 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.509 "is_configured": false, 00:11:28.509 "data_offset": 2048, 00:11:28.509 "data_size": 63488 00:11:28.509 } 00:11:28.509 ] 00:11:28.509 }' 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.509 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.092 [2024-12-14 12:37:28.534614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:29.092 [2024-12-14 12:37:28.534749] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.092 [2024-12-14 12:37:28.534788] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:29.092 [2024-12-14 12:37:28.534819] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.092 [2024-12-14 12:37:28.535358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.092 [2024-12-14 12:37:28.535434] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:29.092 [2024-12-14 12:37:28.535544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:29.092 [2024-12-14 12:37:28.535598] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:29.092 pt2 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.092 [2024-12-14 12:37:28.542572] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.092 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.093 12:37:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.093 "name": "raid_bdev1", 00:11:29.093 "uuid": "687acf46-6dce-428d-86da-1c9632aae8be", 00:11:29.093 "strip_size_kb": 64, 00:11:29.093 "state": "configuring", 00:11:29.093 "raid_level": "concat", 00:11:29.093 "superblock": true, 00:11:29.093 "num_base_bdevs": 4, 00:11:29.093 "num_base_bdevs_discovered": 1, 00:11:29.093 "num_base_bdevs_operational": 4, 00:11:29.093 "base_bdevs_list": [ 00:11:29.093 { 00:11:29.093 "name": "pt1", 00:11:29.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.093 "is_configured": true, 00:11:29.093 "data_offset": 2048, 00:11:29.093 "data_size": 63488 00:11:29.093 }, 00:11:29.093 { 00:11:29.093 "name": null, 00:11:29.093 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.093 "is_configured": false, 00:11:29.093 "data_offset": 0, 00:11:29.093 "data_size": 63488 00:11:29.093 }, 00:11:29.093 { 00:11:29.093 "name": null, 00:11:29.093 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.093 "is_configured": false, 00:11:29.093 "data_offset": 2048, 00:11:29.093 "data_size": 63488 00:11:29.093 }, 00:11:29.093 { 00:11:29.093 "name": null, 00:11:29.093 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.093 "is_configured": false, 00:11:29.093 "data_offset": 2048, 00:11:29.093 "data_size": 63488 00:11:29.093 } 00:11:29.093 ] 00:11:29.093 }' 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.093 12:37:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:29.353 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:29.353 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.353 12:37:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:29.353 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.353 12:37:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.353 [2024-12-14 12:37:29.001874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:29.353 [2024-12-14 12:37:29.002002] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.353 [2024-12-14 12:37:29.002054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:29.353 [2024-12-14 12:37:29.002087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.353 [2024-12-14 12:37:29.002687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.353 [2024-12-14 12:37:29.002752] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:29.353 [2024-12-14 12:37:29.002889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:29.353 [2024-12-14 12:37:29.002947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:29.353 pt2 00:11:29.353 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.353 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.353 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.353 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:29.353 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.353 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.353 [2024-12-14 12:37:29.013820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:29.354 [2024-12-14 12:37:29.013906] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.354 [2024-12-14 12:37:29.013941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:29.354 [2024-12-14 12:37:29.013969] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.354 [2024-12-14 12:37:29.014419] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.354 [2024-12-14 12:37:29.014479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:29.354 [2024-12-14 12:37:29.014559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:29.354 [2024-12-14 12:37:29.014589] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:29.354 pt3 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.354 [2024-12-14 12:37:29.025774] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:29.354 [2024-12-14 12:37:29.025848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.354 [2024-12-14 12:37:29.025880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:29.354 [2024-12-14 12:37:29.025906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.354 [2024-12-14 12:37:29.026338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.354 [2024-12-14 12:37:29.026397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:29.354 [2024-12-14 12:37:29.026495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:29.354 [2024-12-14 12:37:29.026550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:29.354 [2024-12-14 12:37:29.026731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:29.354 [2024-12-14 12:37:29.026773] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:29.354 [2024-12-14 12:37:29.027065] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:29.354 [2024-12-14 12:37:29.027260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:29.354 [2024-12-14 12:37:29.027316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:29.354 [2024-12-14 12:37:29.027525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.354 pt4 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.354 "name": "raid_bdev1", 00:11:29.354 "uuid": "687acf46-6dce-428d-86da-1c9632aae8be", 00:11:29.354 "strip_size_kb": 64, 00:11:29.354 "state": "online", 00:11:29.354 "raid_level": "concat", 00:11:29.354 
"superblock": true, 00:11:29.354 "num_base_bdevs": 4, 00:11:29.354 "num_base_bdevs_discovered": 4, 00:11:29.354 "num_base_bdevs_operational": 4, 00:11:29.354 "base_bdevs_list": [ 00:11:29.354 { 00:11:29.354 "name": "pt1", 00:11:29.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.354 "is_configured": true, 00:11:29.354 "data_offset": 2048, 00:11:29.354 "data_size": 63488 00:11:29.354 }, 00:11:29.354 { 00:11:29.354 "name": "pt2", 00:11:29.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.354 "is_configured": true, 00:11:29.354 "data_offset": 2048, 00:11:29.354 "data_size": 63488 00:11:29.354 }, 00:11:29.354 { 00:11:29.354 "name": "pt3", 00:11:29.354 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.354 "is_configured": true, 00:11:29.354 "data_offset": 2048, 00:11:29.354 "data_size": 63488 00:11:29.354 }, 00:11:29.354 { 00:11:29.354 "name": "pt4", 00:11:29.354 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.354 "is_configured": true, 00:11:29.354 "data_offset": 2048, 00:11:29.354 "data_size": 63488 00:11:29.354 } 00:11:29.354 ] 00:11:29.354 }' 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.354 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:29.922 12:37:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.922 [2024-12-14 12:37:29.457414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.922 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:29.922 "name": "raid_bdev1", 00:11:29.922 "aliases": [ 00:11:29.922 "687acf46-6dce-428d-86da-1c9632aae8be" 00:11:29.922 ], 00:11:29.922 "product_name": "Raid Volume", 00:11:29.922 "block_size": 512, 00:11:29.922 "num_blocks": 253952, 00:11:29.922 "uuid": "687acf46-6dce-428d-86da-1c9632aae8be", 00:11:29.922 "assigned_rate_limits": { 00:11:29.923 "rw_ios_per_sec": 0, 00:11:29.923 "rw_mbytes_per_sec": 0, 00:11:29.923 "r_mbytes_per_sec": 0, 00:11:29.923 "w_mbytes_per_sec": 0 00:11:29.923 }, 00:11:29.923 "claimed": false, 00:11:29.923 "zoned": false, 00:11:29.923 "supported_io_types": { 00:11:29.923 "read": true, 00:11:29.923 "write": true, 00:11:29.923 "unmap": true, 00:11:29.923 "flush": true, 00:11:29.923 "reset": true, 00:11:29.923 "nvme_admin": false, 00:11:29.923 "nvme_io": false, 00:11:29.923 "nvme_io_md": false, 00:11:29.923 "write_zeroes": true, 00:11:29.923 "zcopy": false, 00:11:29.923 "get_zone_info": false, 00:11:29.923 "zone_management": false, 00:11:29.923 "zone_append": false, 00:11:29.923 "compare": false, 00:11:29.923 "compare_and_write": false, 00:11:29.923 "abort": false, 00:11:29.923 "seek_hole": false, 00:11:29.923 "seek_data": false, 00:11:29.923 "copy": false, 00:11:29.923 "nvme_iov_md": false 00:11:29.923 }, 00:11:29.923 
"memory_domains": [ 00:11:29.923 { 00:11:29.923 "dma_device_id": "system", 00:11:29.923 "dma_device_type": 1 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.923 "dma_device_type": 2 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "dma_device_id": "system", 00:11:29.923 "dma_device_type": 1 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.923 "dma_device_type": 2 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "dma_device_id": "system", 00:11:29.923 "dma_device_type": 1 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.923 "dma_device_type": 2 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "dma_device_id": "system", 00:11:29.923 "dma_device_type": 1 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.923 "dma_device_type": 2 00:11:29.923 } 00:11:29.923 ], 00:11:29.923 "driver_specific": { 00:11:29.923 "raid": { 00:11:29.923 "uuid": "687acf46-6dce-428d-86da-1c9632aae8be", 00:11:29.923 "strip_size_kb": 64, 00:11:29.923 "state": "online", 00:11:29.923 "raid_level": "concat", 00:11:29.923 "superblock": true, 00:11:29.923 "num_base_bdevs": 4, 00:11:29.923 "num_base_bdevs_discovered": 4, 00:11:29.923 "num_base_bdevs_operational": 4, 00:11:29.923 "base_bdevs_list": [ 00:11:29.923 { 00:11:29.923 "name": "pt1", 00:11:29.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:29.923 "is_configured": true, 00:11:29.923 "data_offset": 2048, 00:11:29.923 "data_size": 63488 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "name": "pt2", 00:11:29.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.923 "is_configured": true, 00:11:29.923 "data_offset": 2048, 00:11:29.923 "data_size": 63488 00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "name": "pt3", 00:11:29.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.923 "is_configured": true, 00:11:29.923 "data_offset": 2048, 00:11:29.923 "data_size": 63488 
00:11:29.923 }, 00:11:29.923 { 00:11:29.923 "name": "pt4", 00:11:29.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.923 "is_configured": true, 00:11:29.923 "data_offset": 2048, 00:11:29.923 "data_size": 63488 00:11:29.923 } 00:11:29.923 ] 00:11:29.923 } 00:11:29.923 } 00:11:29.923 }' 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:29.923 pt2 00:11:29.923 pt3 00:11:29.923 pt4' 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.923 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.182 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.183 [2024-12-14 12:37:29.780808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 687acf46-6dce-428d-86da-1c9632aae8be '!=' 687acf46-6dce-428d-86da-1c9632aae8be ']' 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74401 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74401 ']' 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74401 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74401 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74401' 00:11:30.183 killing process with pid 74401 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74401 00:11:30.183 [2024-12-14 12:37:29.852769] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.183 12:37:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74401 00:11:30.183 [2024-12-14 12:37:29.852914] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.183 [2024-12-14 12:37:29.853019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.183 [2024-12-14 12:37:29.853078] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:30.752 [2024-12-14 12:37:30.257901] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.127 12:37:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:32.127 00:11:32.127 real 0m5.627s 00:11:32.127 user 0m8.084s 00:11:32.127 sys 0m0.899s 00:11:32.127 12:37:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.127 ************************************ 00:11:32.127 END TEST raid_superblock_test 00:11:32.127 ************************************ 00:11:32.127 12:37:31 
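In the repeated `[[ 512 == \5\1\2\ \ \ ]]` lines above, xtrace backslash-escapes the right-hand side because `[[ == ]]` treats it as a glob pattern; the value actually compared is the string `"512   "`, i.e. `block_size` followed by three empty fields, since jq's `join(" ")` renders `null` as an empty string. A small Python emulation of that per-bdev tuple comparison (field names come from the log's jq filter; the assumption that the md/DIF fields are null on these bdevs is mine):

```python
# Emulate jq's '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'.
# jq renders null as the empty string, which is why the compared value in the
# trace is "512" followed by three spaces.
def cmp_tuple(bdev):
    fields = (bdev.get("block_size"), bdev.get("md_size"),
              bdev.get("md_interleave"), bdev.get("dif_type"))
    return " ".join("" if f is None else str(f) for f in fields)

# Assumed shape: metadata/DIF fields absent (null) on these bdevs,
# matching the '512   ' seen in the trace.
cmp_raid_bdev = cmp_tuple({"block_size": 512})
cmp_base_bdev = cmp_tuple({"block_size": 512})

assert cmp_raid_bdev == cmp_base_bdev == "512   "
```

The point of the check is that every base bdev must present the same block size and metadata layout as the assembled raid bdev; any mismatch would make the `[[ ... == ... ]]` test fail and abort the run.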
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.127 12:37:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:32.127 12:37:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:32.127 12:37:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.127 12:37:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.127 ************************************ 00:11:32.127 START TEST raid_read_error_test 00:11:32.127 ************************************ 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1Rlem0jD8R 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74668 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74668 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74668 ']' 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.127 12:37:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.127 [2024-12-14 12:37:31.633806] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:32.127 [2024-12-14 12:37:31.633925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74668 ] 00:11:32.128 [2024-12-14 12:37:31.810877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.386 [2024-12-14 12:37:31.929976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.645 [2024-12-14 12:37:32.149851] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.645 [2024-12-14 12:37:32.149895] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 BaseBdev1_malloc 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 true 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 [2024-12-14 12:37:32.551384] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:32.905 [2024-12-14 12:37:32.551445] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.905 [2024-12-14 12:37:32.551469] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:32.905 [2024-12-14 12:37:32.551487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.905 [2024-12-14 12:37:32.553906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.905 [2024-12-14 12:37:32.554051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:32.905 BaseBdev1 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 BaseBdev2_malloc 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 true 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.905 [2024-12-14 12:37:32.618974] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:32.905 [2024-12-14 12:37:32.619110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.905 [2024-12-14 12:37:32.619138] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:32.905 [2024-12-14 12:37:32.619153] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.905 [2024-12-14 12:37:32.621657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.905 [2024-12-14 12:37:32.621701] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:32.905 BaseBdev2 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.905 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.165 BaseBdev3_malloc 00:11:33.165 12:37:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.165 true 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.165 [2024-12-14 12:37:32.708937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:33.165 [2024-12-14 12:37:32.709046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.165 [2024-12-14 12:37:32.709072] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:33.165 [2024-12-14 12:37:32.709084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.165 [2024-12-14 12:37:32.711425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.165 [2024-12-14 12:37:32.711469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:33.165 BaseBdev3 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.165 BaseBdev4_malloc 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.165 true 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.165 [2024-12-14 12:37:32.777955] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:33.165 [2024-12-14 12:37:32.778010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.165 [2024-12-14 12:37:32.778031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:33.165 [2024-12-14 12:37:32.778055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.165 [2024-12-14 12:37:32.780517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.165 [2024-12-14 12:37:32.780568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:33.165 BaseBdev4 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.165 [2024-12-14 12:37:32.790035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:33.165 [2024-12-14 12:37:32.792278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.165 [2024-12-14 12:37:32.792367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:33.165 [2024-12-14 12:37:32.792438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:33.165 [2024-12-14 12:37:32.792696] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:33.165 [2024-12-14 12:37:32.792713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:33.165 [2024-12-14 12:37:32.792978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:33.165 [2024-12-14 12:37:32.793205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:33.165 [2024-12-14 12:37:32.793220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:33.165 [2024-12-14 12:37:32.793397] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:33.165 12:37:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.165 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.165 "name": "raid_bdev1", 00:11:33.165 "uuid": "bb4ac528-50d9-42e8-9723-0e445e2375e3", 00:11:33.165 "strip_size_kb": 64, 00:11:33.165 "state": "online", 00:11:33.165 "raid_level": "concat", 00:11:33.165 "superblock": true, 00:11:33.165 "num_base_bdevs": 4, 00:11:33.165 "num_base_bdevs_discovered": 4, 00:11:33.165 "num_base_bdevs_operational": 4, 00:11:33.165 "base_bdevs_list": [ 
00:11:33.165 { 00:11:33.165 "name": "BaseBdev1", 00:11:33.165 "uuid": "cc5aad16-c22e-5142-ae11-fe7316fe07f4", 00:11:33.165 "is_configured": true, 00:11:33.165 "data_offset": 2048, 00:11:33.165 "data_size": 63488 00:11:33.165 }, 00:11:33.165 { 00:11:33.165 "name": "BaseBdev2", 00:11:33.166 "uuid": "298610b0-f1ce-5bc3-8564-0cadf5534542", 00:11:33.166 "is_configured": true, 00:11:33.166 "data_offset": 2048, 00:11:33.166 "data_size": 63488 00:11:33.166 }, 00:11:33.166 { 00:11:33.166 "name": "BaseBdev3", 00:11:33.166 "uuid": "a9380ce0-4d4e-5ff5-beec-2cdedb4e43f4", 00:11:33.166 "is_configured": true, 00:11:33.166 "data_offset": 2048, 00:11:33.166 "data_size": 63488 00:11:33.166 }, 00:11:33.166 { 00:11:33.166 "name": "BaseBdev4", 00:11:33.166 "uuid": "52280646-85be-588f-8c25-a5c5a2364df2", 00:11:33.166 "is_configured": true, 00:11:33.166 "data_offset": 2048, 00:11:33.166 "data_size": 63488 00:11:33.166 } 00:11:33.166 ] 00:11:33.166 }' 00:11:33.166 12:37:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.166 12:37:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.733 12:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:33.733 12:37:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:33.733 [2024-12-14 12:37:33.374544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.670 12:37:34 
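The `verify_raid_bdev_state raid_bdev1 online concat 64 4` calls above check the JSON blob captured into `raid_bdev_info` against the expected state, level, strip size, and base bdev count. A rough Python approximation of those assertions (not the actual `bdev_raid.sh` logic, just the shape of the check, using the fields visible in the dumped JSON):

```python
import json

# Hypothetical minimal slice of the `bdev_raid_get_bdevs all` output dumped
# in the log; only the fields the state check reads are kept.
raid_bdev_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "concat",
  "strip_size_kb": 64,
  "num_base_bdevs": 4,
  "base_bdevs_list": [
    {"name": "BaseBdev1", "is_configured": true},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

# Approximation of verify_raid_bdev_state: expected state/level/strip size,
# and every base bdev discovered and configured.
discovered = sum(b["is_configured"] for b in raid_bdev_info["base_bdevs_list"])
assert raid_bdev_info["state"] == "online"
assert raid_bdev_info["raid_level"] == "concat"
assert raid_bdev_info["strip_size_kb"] == 64
assert discovered == raid_bdev_info["num_base_bdevs"] == 4
```

Note the test re-runs this same verification after injecting a read error on `EE_BaseBdev1_malloc`: because concat has no redundancy, the expected base bdev count stays at 4 rather than dropping to 3 as it would for raid1.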
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.670 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.670 12:37:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.670 "name": "raid_bdev1", 00:11:34.670 "uuid": "bb4ac528-50d9-42e8-9723-0e445e2375e3", 00:11:34.670 "strip_size_kb": 64, 00:11:34.670 "state": "online", 00:11:34.670 "raid_level": "concat", 00:11:34.670 "superblock": true, 00:11:34.670 "num_base_bdevs": 4, 00:11:34.670 "num_base_bdevs_discovered": 4, 00:11:34.670 "num_base_bdevs_operational": 4, 00:11:34.670 "base_bdevs_list": [ 00:11:34.670 { 00:11:34.670 "name": "BaseBdev1", 00:11:34.670 "uuid": "cc5aad16-c22e-5142-ae11-fe7316fe07f4", 00:11:34.670 "is_configured": true, 00:11:34.670 "data_offset": 2048, 00:11:34.670 "data_size": 63488 00:11:34.670 }, 00:11:34.670 { 00:11:34.670 "name": "BaseBdev2", 00:11:34.670 "uuid": "298610b0-f1ce-5bc3-8564-0cadf5534542", 00:11:34.670 "is_configured": true, 00:11:34.670 "data_offset": 2048, 00:11:34.670 "data_size": 63488 00:11:34.670 }, 00:11:34.670 { 00:11:34.670 "name": "BaseBdev3", 00:11:34.670 "uuid": "a9380ce0-4d4e-5ff5-beec-2cdedb4e43f4", 00:11:34.670 "is_configured": true, 00:11:34.670 "data_offset": 2048, 00:11:34.670 "data_size": 63488 00:11:34.670 }, 00:11:34.670 { 00:11:34.670 "name": "BaseBdev4", 00:11:34.670 "uuid": "52280646-85be-588f-8c25-a5c5a2364df2", 00:11:34.670 "is_configured": true, 00:11:34.670 "data_offset": 2048, 00:11:34.670 "data_size": 63488 00:11:34.670 } 00:11:34.671 ] 00:11:34.671 }' 00:11:34.671 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.671 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.240 [2024-12-14 12:37:34.706954] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:35.240 [2024-12-14 12:37:34.707091] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.240 [2024-12-14 12:37:34.710410] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.240 [2024-12-14 12:37:34.710476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.240 [2024-12-14 12:37:34.710532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.240 [2024-12-14 12:37:34.710548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:35.240 { 00:11:35.240 "results": [ 00:11:35.240 { 00:11:35.240 "job": "raid_bdev1", 00:11:35.240 "core_mask": "0x1", 00:11:35.240 "workload": "randrw", 00:11:35.240 "percentage": 50, 00:11:35.240 "status": "finished", 00:11:35.240 "queue_depth": 1, 00:11:35.240 "io_size": 131072, 00:11:35.240 "runtime": 1.333123, 00:11:35.240 "iops": 14148.73196246708, 00:11:35.240 "mibps": 1768.591495308385, 00:11:35.240 "io_failed": 1, 00:11:35.240 "io_timeout": 0, 00:11:35.240 "avg_latency_us": 97.84844256228604, 00:11:35.240 "min_latency_us": 27.388646288209607, 00:11:35.240 "max_latency_us": 1459.5353711790392 00:11:35.240 } 00:11:35.240 ], 00:11:35.240 "core_count": 1 00:11:35.240 } 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74668 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74668 ']' 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74668 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74668 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74668' 00:11:35.240 killing process with pid 74668 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74668 00:11:35.240 [2024-12-14 12:37:34.757005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.240 12:37:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74668 00:11:35.500 [2024-12-14 12:37:35.106713] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1Rlem0jD8R 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:11:36.878 00:11:36.878 real 0m4.832s 00:11:36.878 user 0m5.718s 00:11:36.878 sys 0m0.565s 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
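The bdevperf JSON summary printed earlier and the `fail_per_s=0.75` extracted from the log here are internally consistent, and checking the arithmetic shows where each derived field comes from. A quick verification using the numbers from the log's `results` entry:

```python
# Values copied from the bdevperf "results" block in the log above.
runtime = 1.333123            # seconds
iops = 14148.73196246708      # I/Os per second
io_size = 131072              # 128 KiB per I/O (-o 128k)
io_failed = 1                 # the single injected read failure

# Throughput in MiB/s: IOPS times bytes per I/O, scaled to MiB.
mibps = iops * io_size / (1 << 20)

# Failures per second: this is the figure the test greps out of the
# bdevperf log and compares against "0.00" to confirm the injected
# error on EE_BaseBdev1_malloc was actually observed.
fail_per_s = io_failed / runtime

assert abs(mibps - 1768.591495308385) < 1e-6
assert round(fail_per_s, 2) == 0.75
```

Since concat offers no redundancy (`has_redundancy concat` returns 1), a nonzero failure rate is the expected outcome; for a redundant level like raid1 the test would instead require the I/O to succeed despite the injected error.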
xtrace_disable 00:11:36.878 12:37:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.878 ************************************ 00:11:36.878 END TEST raid_read_error_test 00:11:36.878 ************************************ 00:11:36.878 12:37:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:36.878 12:37:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:36.878 12:37:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.878 12:37:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.878 ************************************ 00:11:36.878 START TEST raid_write_error_test 00:11:36.878 ************************************ 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.4ZHgZUv2Jo 00:11:36.878 12:37:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74817 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74817 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74817 ']' 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.878 12:37:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.878 [2024-12-14 12:37:36.542409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:36.878 [2024-12-14 12:37:36.542532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74817 ] 00:11:37.137 [2024-12-14 12:37:36.718440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:37.137 [2024-12-14 12:37:36.848319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.396 [2024-12-14 12:37:37.084325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.396 [2024-12-14 12:37:37.084506] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 BaseBdev1_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 true 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 [2024-12-14 12:37:37.479198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:38.018 [2024-12-14 12:37:37.479338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.018 [2024-12-14 12:37:37.479387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:38.018 [2024-12-14 12:37:37.479437] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.018 [2024-12-14 12:37:37.482032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.018 [2024-12-14 12:37:37.482140] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:38.018 BaseBdev1 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 BaseBdev2_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:38.018 12:37:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 true 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 [2024-12-14 12:37:37.553804] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:38.018 [2024-12-14 12:37:37.553918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.018 [2024-12-14 12:37:37.553958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:38.018 [2024-12-14 12:37:37.553971] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.018 [2024-12-14 12:37:37.556442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.018 [2024-12-14 12:37:37.556487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:38.018 BaseBdev2 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:38.018 BaseBdev3_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 true 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 [2024-12-14 12:37:37.636321] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:38.018 [2024-12-14 12:37:37.636427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.018 [2024-12-14 12:37:37.636466] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:38.018 [2024-12-14 12:37:37.636501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.018 [2024-12-14 12:37:37.638829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.018 [2024-12-14 12:37:37.638912] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:38.018 BaseBdev3 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 BaseBdev4_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 true 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 [2024-12-14 12:37:37.708367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:38.018 [2024-12-14 12:37:37.708428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.018 [2024-12-14 12:37:37.708448] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:38.018 [2024-12-14 12:37:37.708461] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.018 [2024-12-14 12:37:37.710903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.018 [2024-12-14 12:37:37.710951] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:38.018 BaseBdev4 
00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 [2024-12-14 12:37:37.720419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.018 [2024-12-14 12:37:37.722528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:38.018 [2024-12-14 12:37:37.722617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:38.018 [2024-12-14 12:37:37.722689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:38.018 [2024-12-14 12:37:37.722953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:38.018 [2024-12-14 12:37:37.722971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:38.018 [2024-12-14 12:37:37.723261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:38.018 [2024-12-14 12:37:37.723524] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:38.018 [2024-12-14 12:37:37.723542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:38.018 [2024-12-14 12:37:37.723728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.018 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.278 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.278 "name": "raid_bdev1", 00:11:38.278 "uuid": "e0965c90-3424-4f72-bc6a-64f899dac3fe", 00:11:38.278 "strip_size_kb": 64, 00:11:38.278 "state": "online", 00:11:38.278 "raid_level": "concat", 00:11:38.278 "superblock": true, 00:11:38.278 "num_base_bdevs": 4, 00:11:38.278 "num_base_bdevs_discovered": 4, 00:11:38.278 
"num_base_bdevs_operational": 4, 00:11:38.278 "base_bdevs_list": [ 00:11:38.278 { 00:11:38.278 "name": "BaseBdev1", 00:11:38.278 "uuid": "8474c843-f888-538a-bbfe-2f3706a933b3", 00:11:38.278 "is_configured": true, 00:11:38.278 "data_offset": 2048, 00:11:38.278 "data_size": 63488 00:11:38.278 }, 00:11:38.278 { 00:11:38.278 "name": "BaseBdev2", 00:11:38.278 "uuid": "9725d40f-e872-54f4-96e2-bce5bf4e1ec0", 00:11:38.278 "is_configured": true, 00:11:38.278 "data_offset": 2048, 00:11:38.278 "data_size": 63488 00:11:38.278 }, 00:11:38.278 { 00:11:38.278 "name": "BaseBdev3", 00:11:38.278 "uuid": "570c866d-9247-5369-bacf-a56bce7e2d20", 00:11:38.278 "is_configured": true, 00:11:38.278 "data_offset": 2048, 00:11:38.278 "data_size": 63488 00:11:38.278 }, 00:11:38.278 { 00:11:38.278 "name": "BaseBdev4", 00:11:38.278 "uuid": "4b65bc65-3fe7-5605-8f79-2b7ef20726fc", 00:11:38.278 "is_configured": true, 00:11:38.278 "data_offset": 2048, 00:11:38.278 "data_size": 63488 00:11:38.278 } 00:11:38.278 ] 00:11:38.278 }' 00:11:38.278 12:37:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.278 12:37:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.537 12:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:38.537 12:37:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:38.797 [2024-12-14 12:37:38.292840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.736 12:37:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.736 "name": "raid_bdev1", 00:11:39.736 "uuid": "e0965c90-3424-4f72-bc6a-64f899dac3fe", 00:11:39.736 "strip_size_kb": 64, 00:11:39.736 "state": "online", 00:11:39.736 "raid_level": "concat", 00:11:39.736 "superblock": true, 00:11:39.736 "num_base_bdevs": 4, 00:11:39.736 "num_base_bdevs_discovered": 4, 00:11:39.736 "num_base_bdevs_operational": 4, 00:11:39.736 "base_bdevs_list": [ 00:11:39.736 { 00:11:39.736 "name": "BaseBdev1", 00:11:39.736 "uuid": "8474c843-f888-538a-bbfe-2f3706a933b3", 00:11:39.736 "is_configured": true, 00:11:39.736 "data_offset": 2048, 00:11:39.736 "data_size": 63488 00:11:39.736 }, 00:11:39.736 { 00:11:39.736 "name": "BaseBdev2", 00:11:39.736 "uuid": "9725d40f-e872-54f4-96e2-bce5bf4e1ec0", 00:11:39.736 "is_configured": true, 00:11:39.736 "data_offset": 2048, 00:11:39.736 "data_size": 63488 00:11:39.736 }, 00:11:39.736 { 00:11:39.736 "name": "BaseBdev3", 00:11:39.736 "uuid": "570c866d-9247-5369-bacf-a56bce7e2d20", 00:11:39.736 "is_configured": true, 00:11:39.736 "data_offset": 2048, 00:11:39.736 "data_size": 63488 00:11:39.736 }, 00:11:39.736 { 00:11:39.736 "name": "BaseBdev4", 00:11:39.736 "uuid": "4b65bc65-3fe7-5605-8f79-2b7ef20726fc", 00:11:39.736 "is_configured": true, 00:11:39.736 "data_offset": 2048, 00:11:39.736 "data_size": 63488 00:11:39.736 } 00:11:39.736 ] 00:11:39.736 }' 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.736 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:39.995 [2024-12-14 12:37:39.698233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.995 [2024-12-14 12:37:39.698328] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.995 [2024-12-14 12:37:39.701249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.995 [2024-12-14 12:37:39.701351] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.995 [2024-12-14 12:37:39.701418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.995 [2024-12-14 12:37:39.701486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:39.995 { 00:11:39.995 "results": [ 00:11:39.995 { 00:11:39.995 "job": "raid_bdev1", 00:11:39.995 "core_mask": "0x1", 00:11:39.995 "workload": "randrw", 00:11:39.995 "percentage": 50, 00:11:39.995 "status": "finished", 00:11:39.995 "queue_depth": 1, 00:11:39.995 "io_size": 131072, 00:11:39.995 "runtime": 1.40609, 00:11:39.995 "iops": 13605.814706028776, 00:11:39.995 "mibps": 1700.726838253597, 00:11:39.995 "io_failed": 1, 00:11:39.995 "io_timeout": 0, 00:11:39.995 "avg_latency_us": 101.57508716734212, 00:11:39.995 "min_latency_us": 27.72401746724891, 00:11:39.995 "max_latency_us": 1752.8733624454148 00:11:39.995 } 00:11:39.995 ], 00:11:39.995 "core_count": 1 00:11:39.995 } 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74817 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74817 ']' 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74817 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.995 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74817 00:11:40.254 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:40.254 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:40.254 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74817' 00:11:40.254 killing process with pid 74817 00:11:40.254 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74817 00:11:40.254 [2024-12-14 12:37:39.744727] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:40.254 12:37:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74817 00:11:40.516 [2024-12-14 12:37:40.109721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.4ZHgZUv2Jo 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:41.895 ************************************ 00:11:41.895 END TEST raid_write_error_test 00:11:41.895 ************************************ 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:41.895 12:37:41 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:41.895 00:11:41.895 real 0m4.858s 00:11:41.895 user 0m5.805s 00:11:41.895 sys 0m0.600s 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:41.895 12:37:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.895 12:37:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:41.895 12:37:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:41.895 12:37:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:41.895 12:37:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:41.895 12:37:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:41.895 ************************************ 00:11:41.895 START TEST raid_state_function_test 00:11:41.895 ************************************ 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:41.895 12:37:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74959 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74959' 00:11:41.895 Process raid pid: 74959 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74959 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74959 ']' 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.895 12:37:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.895 [2024-12-14 12:37:41.461688] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:41.896 [2024-12-14 12:37:41.461879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.896 [2024-12-14 12:37:41.615694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.155 [2024-12-14 12:37:41.729180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.415 [2024-12-14 12:37:41.931665] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.415 [2024-12-14 12:37:41.931805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.674 [2024-12-14 12:37:42.295388] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.674 [2024-12-14 12:37:42.295520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.674 [2024-12-14 12:37:42.295554] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.674 [2024-12-14 12:37:42.295579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.674 [2024-12-14 12:37:42.295604] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:42.674 [2024-12-14 12:37:42.295630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.674 [2024-12-14 12:37:42.295649] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.674 [2024-12-14 12:37:42.295670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.674 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.674 "name": "Existed_Raid", 00:11:42.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.674 "strip_size_kb": 0, 00:11:42.674 "state": "configuring", 00:11:42.674 "raid_level": "raid1", 00:11:42.674 "superblock": false, 00:11:42.674 "num_base_bdevs": 4, 00:11:42.674 "num_base_bdevs_discovered": 0, 00:11:42.674 "num_base_bdevs_operational": 4, 00:11:42.674 "base_bdevs_list": [ 00:11:42.674 { 00:11:42.675 "name": "BaseBdev1", 00:11:42.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.675 "is_configured": false, 00:11:42.675 "data_offset": 0, 00:11:42.675 "data_size": 0 00:11:42.675 }, 00:11:42.675 { 00:11:42.675 "name": "BaseBdev2", 00:11:42.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.675 "is_configured": false, 00:11:42.675 "data_offset": 0, 00:11:42.675 "data_size": 0 00:11:42.675 }, 00:11:42.675 { 00:11:42.675 "name": "BaseBdev3", 00:11:42.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.675 "is_configured": false, 00:11:42.675 "data_offset": 0, 00:11:42.675 "data_size": 0 00:11:42.675 }, 00:11:42.675 { 00:11:42.675 "name": "BaseBdev4", 00:11:42.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.675 "is_configured": false, 00:11:42.675 "data_offset": 0, 00:11:42.675 "data_size": 0 00:11:42.675 } 00:11:42.675 ] 00:11:42.675 }' 00:11:42.675 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.675 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.244 [2024-12-14 12:37:42.722644] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.244 [2024-12-14 12:37:42.722746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.244 [2024-12-14 12:37:42.734611] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:43.244 [2024-12-14 12:37:42.734659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:43.244 [2024-12-14 12:37:42.734670] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.244 [2024-12-14 12:37:42.734681] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.244 [2024-12-14 12:37:42.734688] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.244 [2024-12-14 12:37:42.734698] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.244 [2024-12-14 12:37:42.734705] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:43.244 [2024-12-14 12:37:42.734715] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.244 [2024-12-14 12:37:42.783419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.244 BaseBdev1 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:43.244 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.245 [ 00:11:43.245 { 00:11:43.245 "name": "BaseBdev1", 00:11:43.245 "aliases": [ 00:11:43.245 "88020de2-1bfa-4355-b32e-2d86d872c674" 00:11:43.245 ], 00:11:43.245 "product_name": "Malloc disk", 00:11:43.245 "block_size": 512, 00:11:43.245 "num_blocks": 65536, 00:11:43.245 "uuid": "88020de2-1bfa-4355-b32e-2d86d872c674", 00:11:43.245 "assigned_rate_limits": { 00:11:43.245 "rw_ios_per_sec": 0, 00:11:43.245 "rw_mbytes_per_sec": 0, 00:11:43.245 "r_mbytes_per_sec": 0, 00:11:43.245 "w_mbytes_per_sec": 0 00:11:43.245 }, 00:11:43.245 "claimed": true, 00:11:43.245 "claim_type": "exclusive_write", 00:11:43.245 "zoned": false, 00:11:43.245 "supported_io_types": { 00:11:43.245 "read": true, 00:11:43.245 "write": true, 00:11:43.245 "unmap": true, 00:11:43.245 "flush": true, 00:11:43.245 "reset": true, 00:11:43.245 "nvme_admin": false, 00:11:43.245 "nvme_io": false, 00:11:43.245 "nvme_io_md": false, 00:11:43.245 "write_zeroes": true, 00:11:43.245 "zcopy": true, 00:11:43.245 "get_zone_info": false, 00:11:43.245 "zone_management": false, 00:11:43.245 "zone_append": false, 00:11:43.245 "compare": false, 00:11:43.245 "compare_and_write": false, 00:11:43.245 "abort": true, 00:11:43.245 "seek_hole": false, 00:11:43.245 "seek_data": false, 00:11:43.245 "copy": true, 00:11:43.245 "nvme_iov_md": false 00:11:43.245 }, 00:11:43.245 "memory_domains": [ 00:11:43.245 { 00:11:43.245 "dma_device_id": "system", 00:11:43.245 "dma_device_type": 1 00:11:43.245 }, 00:11:43.245 { 00:11:43.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.245 "dma_device_type": 2 00:11:43.245 } 00:11:43.245 ], 00:11:43.245 "driver_specific": {} 00:11:43.245 } 00:11:43.245 ] 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.245 "name": "Existed_Raid", 
00:11:43.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.245 "strip_size_kb": 0, 00:11:43.245 "state": "configuring", 00:11:43.245 "raid_level": "raid1", 00:11:43.245 "superblock": false, 00:11:43.245 "num_base_bdevs": 4, 00:11:43.245 "num_base_bdevs_discovered": 1, 00:11:43.245 "num_base_bdevs_operational": 4, 00:11:43.245 "base_bdevs_list": [ 00:11:43.245 { 00:11:43.245 "name": "BaseBdev1", 00:11:43.245 "uuid": "88020de2-1bfa-4355-b32e-2d86d872c674", 00:11:43.245 "is_configured": true, 00:11:43.245 "data_offset": 0, 00:11:43.245 "data_size": 65536 00:11:43.245 }, 00:11:43.245 { 00:11:43.245 "name": "BaseBdev2", 00:11:43.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.245 "is_configured": false, 00:11:43.245 "data_offset": 0, 00:11:43.245 "data_size": 0 00:11:43.245 }, 00:11:43.245 { 00:11:43.245 "name": "BaseBdev3", 00:11:43.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.245 "is_configured": false, 00:11:43.245 "data_offset": 0, 00:11:43.245 "data_size": 0 00:11:43.245 }, 00:11:43.245 { 00:11:43.245 "name": "BaseBdev4", 00:11:43.245 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.245 "is_configured": false, 00:11:43.245 "data_offset": 0, 00:11:43.245 "data_size": 0 00:11:43.245 } 00:11:43.245 ] 00:11:43.245 }' 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.245 12:37:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.814 [2024-12-14 12:37:43.290621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:43.814 [2024-12-14 12:37:43.290749] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.814 [2024-12-14 12:37:43.302640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:43.814 [2024-12-14 12:37:43.304686] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:43.814 [2024-12-14 12:37:43.304767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:43.814 [2024-12-14 12:37:43.304796] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:43.814 [2024-12-14 12:37:43.304821] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:43.814 [2024-12-14 12:37:43.304840] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:43.814 [2024-12-14 12:37:43.304861] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:43.814 
12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.814 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.815 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.815 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.815 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.815 "name": "Existed_Raid", 00:11:43.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.815 "strip_size_kb": 0, 00:11:43.815 "state": "configuring", 00:11:43.815 "raid_level": "raid1", 00:11:43.815 "superblock": false, 00:11:43.815 "num_base_bdevs": 4, 00:11:43.815 "num_base_bdevs_discovered": 1, 
00:11:43.815 "num_base_bdevs_operational": 4, 00:11:43.815 "base_bdevs_list": [ 00:11:43.815 { 00:11:43.815 "name": "BaseBdev1", 00:11:43.815 "uuid": "88020de2-1bfa-4355-b32e-2d86d872c674", 00:11:43.815 "is_configured": true, 00:11:43.815 "data_offset": 0, 00:11:43.815 "data_size": 65536 00:11:43.815 }, 00:11:43.815 { 00:11:43.815 "name": "BaseBdev2", 00:11:43.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.815 "is_configured": false, 00:11:43.815 "data_offset": 0, 00:11:43.815 "data_size": 0 00:11:43.815 }, 00:11:43.815 { 00:11:43.815 "name": "BaseBdev3", 00:11:43.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.815 "is_configured": false, 00:11:43.815 "data_offset": 0, 00:11:43.815 "data_size": 0 00:11:43.815 }, 00:11:43.815 { 00:11:43.815 "name": "BaseBdev4", 00:11:43.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.815 "is_configured": false, 00:11:43.815 "data_offset": 0, 00:11:43.815 "data_size": 0 00:11:43.815 } 00:11:43.815 ] 00:11:43.815 }' 00:11:43.815 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.815 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.074 [2024-12-14 12:37:43.773962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.074 BaseBdev2 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.074 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.074 [ 00:11:44.074 { 00:11:44.074 "name": "BaseBdev2", 00:11:44.074 "aliases": [ 00:11:44.074 "91fa6d1b-73c5-43a5-9826-67760218a638" 00:11:44.074 ], 00:11:44.074 "product_name": "Malloc disk", 00:11:44.074 "block_size": 512, 00:11:44.074 "num_blocks": 65536, 00:11:44.074 "uuid": "91fa6d1b-73c5-43a5-9826-67760218a638", 00:11:44.074 "assigned_rate_limits": { 00:11:44.074 "rw_ios_per_sec": 0, 00:11:44.074 "rw_mbytes_per_sec": 0, 00:11:44.074 "r_mbytes_per_sec": 0, 00:11:44.074 "w_mbytes_per_sec": 0 00:11:44.074 }, 00:11:44.074 "claimed": true, 00:11:44.074 "claim_type": "exclusive_write", 00:11:44.074 "zoned": false, 00:11:44.074 "supported_io_types": { 00:11:44.074 "read": true, 
00:11:44.074 "write": true, 00:11:44.074 "unmap": true, 00:11:44.074 "flush": true, 00:11:44.074 "reset": true, 00:11:44.074 "nvme_admin": false, 00:11:44.074 "nvme_io": false, 00:11:44.074 "nvme_io_md": false, 00:11:44.074 "write_zeroes": true, 00:11:44.074 "zcopy": true, 00:11:44.074 "get_zone_info": false, 00:11:44.074 "zone_management": false, 00:11:44.074 "zone_append": false, 00:11:44.074 "compare": false, 00:11:44.074 "compare_and_write": false, 00:11:44.074 "abort": true, 00:11:44.074 "seek_hole": false, 00:11:44.074 "seek_data": false, 00:11:44.074 "copy": true, 00:11:44.074 "nvme_iov_md": false 00:11:44.074 }, 00:11:44.074 "memory_domains": [ 00:11:44.074 { 00:11:44.074 "dma_device_id": "system", 00:11:44.074 "dma_device_type": 1 00:11:44.074 }, 00:11:44.074 { 00:11:44.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.334 "dma_device_type": 2 00:11:44.334 } 00:11:44.334 ], 00:11:44.334 "driver_specific": {} 00:11:44.334 } 00:11:44.334 ] 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.334 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.335 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.335 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.335 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.335 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.335 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.335 "name": "Existed_Raid", 00:11:44.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.335 "strip_size_kb": 0, 00:11:44.335 "state": "configuring", 00:11:44.335 "raid_level": "raid1", 00:11:44.335 "superblock": false, 00:11:44.335 "num_base_bdevs": 4, 00:11:44.335 "num_base_bdevs_discovered": 2, 00:11:44.335 "num_base_bdevs_operational": 4, 00:11:44.335 "base_bdevs_list": [ 00:11:44.335 { 00:11:44.335 "name": "BaseBdev1", 00:11:44.335 "uuid": "88020de2-1bfa-4355-b32e-2d86d872c674", 00:11:44.335 "is_configured": true, 00:11:44.335 "data_offset": 0, 00:11:44.335 "data_size": 65536 00:11:44.335 }, 00:11:44.335 { 00:11:44.335 "name": "BaseBdev2", 00:11:44.335 "uuid": "91fa6d1b-73c5-43a5-9826-67760218a638", 00:11:44.335 "is_configured": true, 
00:11:44.335 "data_offset": 0, 00:11:44.335 "data_size": 65536 00:11:44.335 }, 00:11:44.335 { 00:11:44.335 "name": "BaseBdev3", 00:11:44.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.335 "is_configured": false, 00:11:44.335 "data_offset": 0, 00:11:44.335 "data_size": 0 00:11:44.335 }, 00:11:44.335 { 00:11:44.335 "name": "BaseBdev4", 00:11:44.335 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.335 "is_configured": false, 00:11:44.335 "data_offset": 0, 00:11:44.335 "data_size": 0 00:11:44.335 } 00:11:44.335 ] 00:11:44.335 }' 00:11:44.335 12:37:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.335 12:37:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.595 [2024-12-14 12:37:44.299415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.595 BaseBdev3 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.595 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.595 [ 00:11:44.595 { 00:11:44.595 "name": "BaseBdev3", 00:11:44.595 "aliases": [ 00:11:44.595 "04709097-39b8-4baf-9888-286c8de89758" 00:11:44.595 ], 00:11:44.595 "product_name": "Malloc disk", 00:11:44.595 "block_size": 512, 00:11:44.595 "num_blocks": 65536, 00:11:44.595 "uuid": "04709097-39b8-4baf-9888-286c8de89758", 00:11:44.595 "assigned_rate_limits": { 00:11:44.595 "rw_ios_per_sec": 0, 00:11:44.595 "rw_mbytes_per_sec": 0, 00:11:44.595 "r_mbytes_per_sec": 0, 00:11:44.595 "w_mbytes_per_sec": 0 00:11:44.595 }, 00:11:44.595 "claimed": true, 00:11:44.595 "claim_type": "exclusive_write", 00:11:44.595 "zoned": false, 00:11:44.595 "supported_io_types": { 00:11:44.595 "read": true, 00:11:44.595 "write": true, 00:11:44.595 "unmap": true, 00:11:44.595 "flush": true, 00:11:44.595 "reset": true, 00:11:44.595 "nvme_admin": false, 00:11:44.595 "nvme_io": false, 00:11:44.595 "nvme_io_md": false, 00:11:44.595 "write_zeroes": true, 00:11:44.595 "zcopy": true, 00:11:44.854 "get_zone_info": false, 00:11:44.854 "zone_management": false, 00:11:44.854 "zone_append": false, 00:11:44.854 "compare": false, 00:11:44.854 "compare_and_write": false, 
00:11:44.855 "abort": true, 00:11:44.855 "seek_hole": false, 00:11:44.855 "seek_data": false, 00:11:44.855 "copy": true, 00:11:44.855 "nvme_iov_md": false 00:11:44.855 }, 00:11:44.855 "memory_domains": [ 00:11:44.855 { 00:11:44.855 "dma_device_id": "system", 00:11:44.855 "dma_device_type": 1 00:11:44.855 }, 00:11:44.855 { 00:11:44.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.855 "dma_device_type": 2 00:11:44.855 } 00:11:44.855 ], 00:11:44.855 "driver_specific": {} 00:11:44.855 } 00:11:44.855 ] 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.855 "name": "Existed_Raid", 00:11:44.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.855 "strip_size_kb": 0, 00:11:44.855 "state": "configuring", 00:11:44.855 "raid_level": "raid1", 00:11:44.855 "superblock": false, 00:11:44.855 "num_base_bdevs": 4, 00:11:44.855 "num_base_bdevs_discovered": 3, 00:11:44.855 "num_base_bdevs_operational": 4, 00:11:44.855 "base_bdevs_list": [ 00:11:44.855 { 00:11:44.855 "name": "BaseBdev1", 00:11:44.855 "uuid": "88020de2-1bfa-4355-b32e-2d86d872c674", 00:11:44.855 "is_configured": true, 00:11:44.855 "data_offset": 0, 00:11:44.855 "data_size": 65536 00:11:44.855 }, 00:11:44.855 { 00:11:44.855 "name": "BaseBdev2", 00:11:44.855 "uuid": "91fa6d1b-73c5-43a5-9826-67760218a638", 00:11:44.855 "is_configured": true, 00:11:44.855 "data_offset": 0, 00:11:44.855 "data_size": 65536 00:11:44.855 }, 00:11:44.855 { 00:11:44.855 "name": "BaseBdev3", 00:11:44.855 "uuid": "04709097-39b8-4baf-9888-286c8de89758", 00:11:44.855 "is_configured": true, 00:11:44.855 "data_offset": 0, 00:11:44.855 "data_size": 65536 00:11:44.855 }, 00:11:44.855 { 00:11:44.855 "name": "BaseBdev4", 00:11:44.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:44.855 "is_configured": false, 
00:11:44.855 "data_offset": 0, 00:11:44.855 "data_size": 0 00:11:44.855 } 00:11:44.855 ] 00:11:44.855 }' 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.855 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.114 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:45.114 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.114 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.114 [2024-12-14 12:37:44.809656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:45.114 [2024-12-14 12:37:44.809711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:45.115 [2024-12-14 12:37:44.809720] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:45.115 [2024-12-14 12:37:44.809981] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:45.115 [2024-12-14 12:37:44.810244] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:45.115 [2024-12-14 12:37:44.810260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:45.115 [2024-12-14 12:37:44.810535] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.115 BaseBdev4 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.115 [ 00:11:45.115 { 00:11:45.115 "name": "BaseBdev4", 00:11:45.115 "aliases": [ 00:11:45.115 "70473e31-4c99-47d0-aed2-14e26f7b1adc" 00:11:45.115 ], 00:11:45.115 "product_name": "Malloc disk", 00:11:45.115 "block_size": 512, 00:11:45.115 "num_blocks": 65536, 00:11:45.115 "uuid": "70473e31-4c99-47d0-aed2-14e26f7b1adc", 00:11:45.115 "assigned_rate_limits": { 00:11:45.115 "rw_ios_per_sec": 0, 00:11:45.115 "rw_mbytes_per_sec": 0, 00:11:45.115 "r_mbytes_per_sec": 0, 00:11:45.115 "w_mbytes_per_sec": 0 00:11:45.115 }, 00:11:45.115 "claimed": true, 00:11:45.115 "claim_type": "exclusive_write", 00:11:45.115 "zoned": false, 00:11:45.115 "supported_io_types": { 00:11:45.115 "read": true, 00:11:45.115 "write": true, 00:11:45.115 "unmap": true, 00:11:45.115 "flush": true, 00:11:45.115 "reset": true, 00:11:45.115 
"nvme_admin": false, 00:11:45.115 "nvme_io": false, 00:11:45.115 "nvme_io_md": false, 00:11:45.115 "write_zeroes": true, 00:11:45.115 "zcopy": true, 00:11:45.115 "get_zone_info": false, 00:11:45.115 "zone_management": false, 00:11:45.115 "zone_append": false, 00:11:45.115 "compare": false, 00:11:45.115 "compare_and_write": false, 00:11:45.115 "abort": true, 00:11:45.115 "seek_hole": false, 00:11:45.115 "seek_data": false, 00:11:45.115 "copy": true, 00:11:45.115 "nvme_iov_md": false 00:11:45.115 }, 00:11:45.115 "memory_domains": [ 00:11:45.115 { 00:11:45.115 "dma_device_id": "system", 00:11:45.115 "dma_device_type": 1 00:11:45.115 }, 00:11:45.115 { 00:11:45.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.115 "dma_device_type": 2 00:11:45.115 } 00:11:45.115 ], 00:11:45.115 "driver_specific": {} 00:11:45.115 } 00:11:45.115 ] 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:45.115 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.374 12:37:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.374 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.375 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.375 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.375 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.375 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.375 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.375 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.375 "name": "Existed_Raid", 00:11:45.375 "uuid": "8d2476fd-c0c5-415f-992c-1bf1b778c217", 00:11:45.375 "strip_size_kb": 0, 00:11:45.375 "state": "online", 00:11:45.375 "raid_level": "raid1", 00:11:45.375 "superblock": false, 00:11:45.375 "num_base_bdevs": 4, 00:11:45.375 "num_base_bdevs_discovered": 4, 00:11:45.375 "num_base_bdevs_operational": 4, 00:11:45.375 "base_bdevs_list": [ 00:11:45.375 { 00:11:45.375 "name": "BaseBdev1", 00:11:45.375 "uuid": "88020de2-1bfa-4355-b32e-2d86d872c674", 00:11:45.375 "is_configured": true, 00:11:45.375 "data_offset": 0, 00:11:45.375 "data_size": 65536 00:11:45.375 }, 00:11:45.375 { 00:11:45.375 "name": "BaseBdev2", 00:11:45.375 "uuid": "91fa6d1b-73c5-43a5-9826-67760218a638", 00:11:45.375 "is_configured": true, 00:11:45.375 "data_offset": 0, 00:11:45.375 "data_size": 65536 00:11:45.375 }, 00:11:45.375 { 00:11:45.375 "name": "BaseBdev3", 00:11:45.375 "uuid": 
"04709097-39b8-4baf-9888-286c8de89758", 00:11:45.375 "is_configured": true, 00:11:45.375 "data_offset": 0, 00:11:45.375 "data_size": 65536 00:11:45.375 }, 00:11:45.375 { 00:11:45.375 "name": "BaseBdev4", 00:11:45.375 "uuid": "70473e31-4c99-47d0-aed2-14e26f7b1adc", 00:11:45.375 "is_configured": true, 00:11:45.375 "data_offset": 0, 00:11:45.375 "data_size": 65536 00:11:45.375 } 00:11:45.375 ] 00:11:45.375 }' 00:11:45.375 12:37:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.375 12:37:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:45.634 [2024-12-14 12:37:45.273321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.634 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.634 12:37:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:45.634 "name": "Existed_Raid", 00:11:45.634 "aliases": [ 00:11:45.634 "8d2476fd-c0c5-415f-992c-1bf1b778c217" 00:11:45.634 ], 00:11:45.634 "product_name": "Raid Volume", 00:11:45.634 "block_size": 512, 00:11:45.634 "num_blocks": 65536, 00:11:45.634 "uuid": "8d2476fd-c0c5-415f-992c-1bf1b778c217", 00:11:45.634 "assigned_rate_limits": { 00:11:45.634 "rw_ios_per_sec": 0, 00:11:45.634 "rw_mbytes_per_sec": 0, 00:11:45.634 "r_mbytes_per_sec": 0, 00:11:45.634 "w_mbytes_per_sec": 0 00:11:45.634 }, 00:11:45.634 "claimed": false, 00:11:45.634 "zoned": false, 00:11:45.634 "supported_io_types": { 00:11:45.634 "read": true, 00:11:45.634 "write": true, 00:11:45.634 "unmap": false, 00:11:45.634 "flush": false, 00:11:45.634 "reset": true, 00:11:45.634 "nvme_admin": false, 00:11:45.634 "nvme_io": false, 00:11:45.634 "nvme_io_md": false, 00:11:45.634 "write_zeroes": true, 00:11:45.634 "zcopy": false, 00:11:45.634 "get_zone_info": false, 00:11:45.634 "zone_management": false, 00:11:45.634 "zone_append": false, 00:11:45.634 "compare": false, 00:11:45.634 "compare_and_write": false, 00:11:45.634 "abort": false, 00:11:45.634 "seek_hole": false, 00:11:45.634 "seek_data": false, 00:11:45.634 "copy": false, 00:11:45.634 "nvme_iov_md": false 00:11:45.634 }, 00:11:45.634 "memory_domains": [ 00:11:45.634 { 00:11:45.634 "dma_device_id": "system", 00:11:45.634 "dma_device_type": 1 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.634 "dma_device_type": 2 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "dma_device_id": "system", 00:11:45.634 "dma_device_type": 1 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.634 "dma_device_type": 2 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "dma_device_id": "system", 00:11:45.634 "dma_device_type": 1 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:45.634 "dma_device_type": 2 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "dma_device_id": "system", 00:11:45.634 "dma_device_type": 1 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.634 "dma_device_type": 2 00:11:45.634 } 00:11:45.634 ], 00:11:45.634 "driver_specific": { 00:11:45.634 "raid": { 00:11:45.634 "uuid": "8d2476fd-c0c5-415f-992c-1bf1b778c217", 00:11:45.634 "strip_size_kb": 0, 00:11:45.634 "state": "online", 00:11:45.634 "raid_level": "raid1", 00:11:45.634 "superblock": false, 00:11:45.634 "num_base_bdevs": 4, 00:11:45.634 "num_base_bdevs_discovered": 4, 00:11:45.634 "num_base_bdevs_operational": 4, 00:11:45.634 "base_bdevs_list": [ 00:11:45.634 { 00:11:45.634 "name": "BaseBdev1", 00:11:45.634 "uuid": "88020de2-1bfa-4355-b32e-2d86d872c674", 00:11:45.634 "is_configured": true, 00:11:45.634 "data_offset": 0, 00:11:45.634 "data_size": 65536 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "name": "BaseBdev2", 00:11:45.634 "uuid": "91fa6d1b-73c5-43a5-9826-67760218a638", 00:11:45.634 "is_configured": true, 00:11:45.634 "data_offset": 0, 00:11:45.634 "data_size": 65536 00:11:45.634 }, 00:11:45.634 { 00:11:45.634 "name": "BaseBdev3", 00:11:45.634 "uuid": "04709097-39b8-4baf-9888-286c8de89758", 00:11:45.634 "is_configured": true, 00:11:45.634 "data_offset": 0, 00:11:45.634 "data_size": 65536 00:11:45.634 }, 00:11:45.635 { 00:11:45.635 "name": "BaseBdev4", 00:11:45.635 "uuid": "70473e31-4c99-47d0-aed2-14e26f7b1adc", 00:11:45.635 "is_configured": true, 00:11:45.635 "data_offset": 0, 00:11:45.635 "data_size": 65536 00:11:45.635 } 00:11:45.635 ] 00:11:45.635 } 00:11:45.635 } 00:11:45.635 }' 00:11:45.635 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.635 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:45.635 BaseBdev2 00:11:45.635 BaseBdev3 
00:11:45.635 BaseBdev4' 00:11:45.635 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.894 12:37:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.894 12:37:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.894 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.894 [2024-12-14 12:37:45.576448] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.154 
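The repeated `[[ 512 == \5\1\2\ \ \  ]]` checks above come from `verify_raid_bdev_properties`: jq joins each bdev's `block_size`, `md_size`, `md_interleave`, and `dif_type` into one space-separated tuple (empty fields leave trailing spaces, hence the escaped blanks), and the raid volume's tuple is compared against every base bdev's. A pure-bash sketch of that comparison, with tuples passed in directly instead of fetched via `rpc_cmd`/jq (an assumed simplification of the real helper):

```shell
#!/usr/bin/env bash
# Hedged sketch of the tuple comparison in verify_raid_bdev_properties:
# first argument is the raid volume's "block_size md_size md_interleave dif_type"
# string; the rest are the base bdevs' tuples. Any mismatch fails the check.
verify_props() {
  local cmp_raid_bdev=$1
  shift
  local cmp_base_bdev
  for cmp_base_bdev in "$@"; do
    # Quoted RHS forces literal comparison, matching the escaped-space
    # pattern "\5\1\2\ \ \ " seen in the trace.
    [[ $cmp_raid_bdev == "$cmp_base_bdev" ]] || return 1
  done
  return 0
}

# '512   ' is block_size=512 with empty md_size/md_interleave/dif_type fields.
verify_props '512   ' '512   ' '512   ' '512   ' '512   ' && echo "all base bdevs match"
```

Because all five bdevs here are 512-byte Malloc disks with no metadata, every tuple is identical and each `[[ ... ]]` in the trace passes.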
12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.154 "name": "Existed_Raid", 00:11:46.154 "uuid": "8d2476fd-c0c5-415f-992c-1bf1b778c217", 00:11:46.154 "strip_size_kb": 0, 00:11:46.154 "state": "online", 00:11:46.154 "raid_level": "raid1", 00:11:46.154 "superblock": false, 00:11:46.154 "num_base_bdevs": 4, 00:11:46.154 "num_base_bdevs_discovered": 3, 00:11:46.154 "num_base_bdevs_operational": 3, 00:11:46.154 "base_bdevs_list": [ 00:11:46.154 { 00:11:46.154 "name": null, 00:11:46.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.154 "is_configured": false, 00:11:46.154 "data_offset": 0, 00:11:46.154 "data_size": 65536 00:11:46.154 }, 00:11:46.154 { 00:11:46.154 "name": "BaseBdev2", 00:11:46.154 "uuid": "91fa6d1b-73c5-43a5-9826-67760218a638", 00:11:46.154 "is_configured": true, 00:11:46.154 "data_offset": 0, 00:11:46.154 "data_size": 65536 00:11:46.154 }, 00:11:46.154 { 00:11:46.154 "name": "BaseBdev3", 00:11:46.154 "uuid": "04709097-39b8-4baf-9888-286c8de89758", 00:11:46.154 "is_configured": true, 00:11:46.154 "data_offset": 0, 
00:11:46.154 "data_size": 65536 00:11:46.154 }, 00:11:46.154 { 00:11:46.154 "name": "BaseBdev4", 00:11:46.154 "uuid": "70473e31-4c99-47d0-aed2-14e26f7b1adc", 00:11:46.154 "is_configured": true, 00:11:46.154 "data_offset": 0, 00:11:46.154 "data_size": 65536 00:11:46.154 } 00:11:46.154 ] 00:11:46.154 }' 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.154 12:37:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.414 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.674 [2024-12-14 12:37:46.156006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.674 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.674 [2024-12-14 12:37:46.309324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.934 [2024-12-14 12:37:46.467002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:46.934 [2024-12-14 12:37:46.467123] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.934 [2024-12-14 12:37:46.566128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.934 [2024-12-14 12:37:46.566192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.934 [2024-12-14 12:37:46.566207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- 
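The `(( i = 1 ))` / `(( i < num_base_bdevs ))` trace lines above drive a loop that deletes one base bdev per iteration (BaseBdev2, then BaseBdev3, then BaseBdev4) and re-verifies the raid state after each removal, until the raid1 volume finally transitions offline. A sketch of just the loop structure, with the `rpc_cmd bdev_malloc_delete` call replaced by list accumulation so the iteration order is visible (names mirror the log; the function itself is illustrative):

```shell
#!/usr/bin/env bash
# Hedged sketch of the base-bdev removal loop: starting at i=1 skips
# BaseBdev1 (already removed earlier in the test), so iterations hit
# BaseBdev2..BaseBdev4, matching the delete order in the trace.
collect_deletions() {
  local num_base_bdevs=$1
  local i
  local deleted=()
  for ((i = 1; i < num_base_bdevs; i++)); do
    deleted+=("BaseBdev$((i + 1))")   # stand-in for: rpc_cmd bdev_malloc_delete "BaseBdev$((i + 1))"
  done
  echo "${deleted[*]}"
}

collect_deletions 4
```

In the real test each deletion is followed by the `bdev_raid_get_bdevs all` / jq verification seen above, which is how the harness observes `num_base_bdevs_discovered` dropping until the volume goes offline.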
bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.934 BaseBdev2 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.934 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.195 [ 00:11:47.195 { 00:11:47.195 "name": "BaseBdev2", 00:11:47.195 "aliases": [ 00:11:47.195 "1b3f6990-9076-487a-99bd-d0782fd6192f" 00:11:47.195 ], 00:11:47.195 "product_name": "Malloc disk", 00:11:47.195 "block_size": 512, 00:11:47.195 "num_blocks": 65536, 00:11:47.195 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:47.195 "assigned_rate_limits": { 00:11:47.195 "rw_ios_per_sec": 0, 00:11:47.195 "rw_mbytes_per_sec": 0, 00:11:47.195 "r_mbytes_per_sec": 0, 00:11:47.195 "w_mbytes_per_sec": 0 00:11:47.195 }, 00:11:47.195 "claimed": false, 00:11:47.195 "zoned": false, 00:11:47.195 "supported_io_types": { 00:11:47.195 "read": true, 00:11:47.195 "write": true, 00:11:47.195 "unmap": true, 00:11:47.195 "flush": true, 00:11:47.195 "reset": true, 00:11:47.195 "nvme_admin": false, 00:11:47.195 "nvme_io": false, 00:11:47.195 "nvme_io_md": false, 00:11:47.195 "write_zeroes": true, 00:11:47.195 "zcopy": true, 00:11:47.195 "get_zone_info": false, 00:11:47.195 "zone_management": false, 00:11:47.195 "zone_append": false, 
00:11:47.195 "compare": false, 00:11:47.195 "compare_and_write": false, 00:11:47.195 "abort": true, 00:11:47.195 "seek_hole": false, 00:11:47.195 "seek_data": false, 00:11:47.195 "copy": true, 00:11:47.195 "nvme_iov_md": false 00:11:47.195 }, 00:11:47.195 "memory_domains": [ 00:11:47.195 { 00:11:47.195 "dma_device_id": "system", 00:11:47.195 "dma_device_type": 1 00:11:47.195 }, 00:11:47.195 { 00:11:47.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.195 "dma_device_type": 2 00:11:47.195 } 00:11:47.195 ], 00:11:47.195 "driver_specific": {} 00:11:47.195 } 00:11:47.195 ] 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.195 BaseBdev3 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.195 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.195 [ 00:11:47.195 { 00:11:47.195 "name": "BaseBdev3", 00:11:47.195 "aliases": [ 00:11:47.195 "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23" 00:11:47.195 ], 00:11:47.195 "product_name": "Malloc disk", 00:11:47.195 "block_size": 512, 00:11:47.195 "num_blocks": 65536, 00:11:47.195 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:47.195 "assigned_rate_limits": { 00:11:47.195 "rw_ios_per_sec": 0, 00:11:47.195 "rw_mbytes_per_sec": 0, 00:11:47.195 "r_mbytes_per_sec": 0, 00:11:47.195 "w_mbytes_per_sec": 0 00:11:47.195 }, 00:11:47.195 "claimed": false, 00:11:47.195 "zoned": false, 00:11:47.195 "supported_io_types": { 00:11:47.195 "read": true, 00:11:47.195 "write": true, 00:11:47.195 "unmap": true, 00:11:47.195 "flush": true, 00:11:47.195 "reset": true, 00:11:47.195 "nvme_admin": false, 00:11:47.195 "nvme_io": false, 00:11:47.195 "nvme_io_md": false, 00:11:47.195 "write_zeroes": true, 00:11:47.195 "zcopy": true, 00:11:47.195 "get_zone_info": false, 00:11:47.195 "zone_management": false, 00:11:47.195 "zone_append": false, 
00:11:47.195 "compare": false, 00:11:47.195 "compare_and_write": false, 00:11:47.195 "abort": true, 00:11:47.195 "seek_hole": false, 00:11:47.195 "seek_data": false, 00:11:47.195 "copy": true, 00:11:47.195 "nvme_iov_md": false 00:11:47.196 }, 00:11:47.196 "memory_domains": [ 00:11:47.196 { 00:11:47.196 "dma_device_id": "system", 00:11:47.196 "dma_device_type": 1 00:11:47.196 }, 00:11:47.196 { 00:11:47.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.196 "dma_device_type": 2 00:11:47.196 } 00:11:47.196 ], 00:11:47.196 "driver_specific": {} 00:11:47.196 } 00:11:47.196 ] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.196 BaseBdev4 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.196 [ 00:11:47.196 { 00:11:47.196 "name": "BaseBdev4", 00:11:47.196 "aliases": [ 00:11:47.196 "791051ed-d0ca-4c7a-915c-ea916a37b1d0" 00:11:47.196 ], 00:11:47.196 "product_name": "Malloc disk", 00:11:47.196 "block_size": 512, 00:11:47.196 "num_blocks": 65536, 00:11:47.196 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:47.196 "assigned_rate_limits": { 00:11:47.196 "rw_ios_per_sec": 0, 00:11:47.196 "rw_mbytes_per_sec": 0, 00:11:47.196 "r_mbytes_per_sec": 0, 00:11:47.196 "w_mbytes_per_sec": 0 00:11:47.196 }, 00:11:47.196 "claimed": false, 00:11:47.196 "zoned": false, 00:11:47.196 "supported_io_types": { 00:11:47.196 "read": true, 00:11:47.196 "write": true, 00:11:47.196 "unmap": true, 00:11:47.196 "flush": true, 00:11:47.196 "reset": true, 00:11:47.196 "nvme_admin": false, 00:11:47.196 "nvme_io": false, 00:11:47.196 "nvme_io_md": false, 00:11:47.196 "write_zeroes": true, 00:11:47.196 "zcopy": true, 00:11:47.196 "get_zone_info": false, 00:11:47.196 "zone_management": false, 00:11:47.196 "zone_append": false, 
00:11:47.196 "compare": false, 00:11:47.196 "compare_and_write": false, 00:11:47.196 "abort": true, 00:11:47.196 "seek_hole": false, 00:11:47.196 "seek_data": false, 00:11:47.196 "copy": true, 00:11:47.196 "nvme_iov_md": false 00:11:47.196 }, 00:11:47.196 "memory_domains": [ 00:11:47.196 { 00:11:47.196 "dma_device_id": "system", 00:11:47.196 "dma_device_type": 1 00:11:47.196 }, 00:11:47.196 { 00:11:47.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.196 "dma_device_type": 2 00:11:47.196 } 00:11:47.196 ], 00:11:47.196 "driver_specific": {} 00:11:47.196 } 00:11:47.196 ] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.196 [2024-12-14 12:37:46.869503] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:47.196 [2024-12-14 12:37:46.869550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:47.196 [2024-12-14 12:37:46.869571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:47.196 [2024-12-14 12:37:46.871563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:47.196 [2024-12-14 12:37:46.871617] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:47.196 "name": "Existed_Raid", 00:11:47.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.196 "strip_size_kb": 0, 00:11:47.196 "state": "configuring", 00:11:47.196 "raid_level": "raid1", 00:11:47.196 "superblock": false, 00:11:47.196 "num_base_bdevs": 4, 00:11:47.196 "num_base_bdevs_discovered": 3, 00:11:47.196 "num_base_bdevs_operational": 4, 00:11:47.196 "base_bdevs_list": [ 00:11:47.196 { 00:11:47.196 "name": "BaseBdev1", 00:11:47.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.196 "is_configured": false, 00:11:47.196 "data_offset": 0, 00:11:47.196 "data_size": 0 00:11:47.196 }, 00:11:47.196 { 00:11:47.196 "name": "BaseBdev2", 00:11:47.196 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:47.196 "is_configured": true, 00:11:47.196 "data_offset": 0, 00:11:47.196 "data_size": 65536 00:11:47.196 }, 00:11:47.196 { 00:11:47.196 "name": "BaseBdev3", 00:11:47.196 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:47.196 "is_configured": true, 00:11:47.196 "data_offset": 0, 00:11:47.196 "data_size": 65536 00:11:47.196 }, 00:11:47.196 { 00:11:47.196 "name": "BaseBdev4", 00:11:47.196 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:47.196 "is_configured": true, 00:11:47.196 "data_offset": 0, 00:11:47.196 "data_size": 65536 00:11:47.196 } 00:11:47.196 ] 00:11:47.196 }' 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.196 12:37:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.766 [2024-12-14 12:37:47.304833] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.766 "name": "Existed_Raid", 00:11:47.766 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:47.766 "strip_size_kb": 0, 00:11:47.766 "state": "configuring", 00:11:47.766 "raid_level": "raid1", 00:11:47.766 "superblock": false, 00:11:47.766 "num_base_bdevs": 4, 00:11:47.766 "num_base_bdevs_discovered": 2, 00:11:47.766 "num_base_bdevs_operational": 4, 00:11:47.766 "base_bdevs_list": [ 00:11:47.766 { 00:11:47.766 "name": "BaseBdev1", 00:11:47.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.766 "is_configured": false, 00:11:47.766 "data_offset": 0, 00:11:47.766 "data_size": 0 00:11:47.766 }, 00:11:47.766 { 00:11:47.766 "name": null, 00:11:47.766 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:47.766 "is_configured": false, 00:11:47.766 "data_offset": 0, 00:11:47.766 "data_size": 65536 00:11:47.766 }, 00:11:47.766 { 00:11:47.766 "name": "BaseBdev3", 00:11:47.766 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:47.766 "is_configured": true, 00:11:47.766 "data_offset": 0, 00:11:47.766 "data_size": 65536 00:11:47.766 }, 00:11:47.766 { 00:11:47.766 "name": "BaseBdev4", 00:11:47.766 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:47.766 "is_configured": true, 00:11:47.766 "data_offset": 0, 00:11:47.766 "data_size": 65536 00:11:47.766 } 00:11:47.766 ] 00:11:47.766 }' 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.766 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.026 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.026 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:48.026 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.026 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.285 [2024-12-14 12:37:47.847889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:48.285 BaseBdev1 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.285 [ 00:11:48.285 { 00:11:48.285 "name": "BaseBdev1", 00:11:48.285 "aliases": [ 00:11:48.285 "7842f744-c18c-4871-82ee-77da7b54a1b3" 00:11:48.285 ], 00:11:48.285 "product_name": "Malloc disk", 00:11:48.285 "block_size": 512, 00:11:48.285 "num_blocks": 65536, 00:11:48.285 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:48.285 "assigned_rate_limits": { 00:11:48.285 "rw_ios_per_sec": 0, 00:11:48.285 "rw_mbytes_per_sec": 0, 00:11:48.285 "r_mbytes_per_sec": 0, 00:11:48.285 "w_mbytes_per_sec": 0 00:11:48.285 }, 00:11:48.285 "claimed": true, 00:11:48.285 "claim_type": "exclusive_write", 00:11:48.285 "zoned": false, 00:11:48.285 "supported_io_types": { 00:11:48.285 "read": true, 00:11:48.285 "write": true, 00:11:48.285 "unmap": true, 00:11:48.285 "flush": true, 00:11:48.285 "reset": true, 00:11:48.285 "nvme_admin": false, 00:11:48.285 "nvme_io": false, 00:11:48.285 "nvme_io_md": false, 00:11:48.285 "write_zeroes": true, 00:11:48.285 "zcopy": true, 00:11:48.285 "get_zone_info": false, 00:11:48.285 "zone_management": false, 00:11:48.285 "zone_append": false, 00:11:48.285 "compare": false, 00:11:48.285 "compare_and_write": false, 00:11:48.285 "abort": true, 00:11:48.285 "seek_hole": false, 00:11:48.285 "seek_data": false, 00:11:48.285 "copy": true, 00:11:48.285 "nvme_iov_md": false 00:11:48.285 }, 00:11:48.285 "memory_domains": [ 00:11:48.285 { 00:11:48.285 "dma_device_id": "system", 00:11:48.285 "dma_device_type": 1 00:11:48.285 }, 00:11:48.285 { 00:11:48.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.285 "dma_device_type": 2 00:11:48.285 } 00:11:48.285 ], 00:11:48.285 "driver_specific": {} 00:11:48.285 } 00:11:48.285 ] 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.285 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.285 "name": "Existed_Raid", 00:11:48.285 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:48.286 "strip_size_kb": 0, 00:11:48.286 "state": "configuring", 00:11:48.286 "raid_level": "raid1", 00:11:48.286 "superblock": false, 00:11:48.286 "num_base_bdevs": 4, 00:11:48.286 "num_base_bdevs_discovered": 3, 00:11:48.286 "num_base_bdevs_operational": 4, 00:11:48.286 "base_bdevs_list": [ 00:11:48.286 { 00:11:48.286 "name": "BaseBdev1", 00:11:48.286 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:48.286 "is_configured": true, 00:11:48.286 "data_offset": 0, 00:11:48.286 "data_size": 65536 00:11:48.286 }, 00:11:48.286 { 00:11:48.286 "name": null, 00:11:48.286 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:48.286 "is_configured": false, 00:11:48.286 "data_offset": 0, 00:11:48.286 "data_size": 65536 00:11:48.286 }, 00:11:48.286 { 00:11:48.286 "name": "BaseBdev3", 00:11:48.286 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:48.286 "is_configured": true, 00:11:48.286 "data_offset": 0, 00:11:48.286 "data_size": 65536 00:11:48.286 }, 00:11:48.286 { 00:11:48.286 "name": "BaseBdev4", 00:11:48.286 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:48.286 "is_configured": true, 00:11:48.286 "data_offset": 0, 00:11:48.286 "data_size": 65536 00:11:48.286 } 00:11:48.286 ] 00:11:48.286 }' 00:11:48.286 12:37:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.286 12:37:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 [2024-12-14 12:37:48.363142] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.854 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.854 "name": "Existed_Raid", 00:11:48.854 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.854 "strip_size_kb": 0, 00:11:48.854 "state": "configuring", 00:11:48.854 "raid_level": "raid1", 00:11:48.854 "superblock": false, 00:11:48.854 "num_base_bdevs": 4, 00:11:48.854 "num_base_bdevs_discovered": 2, 00:11:48.854 "num_base_bdevs_operational": 4, 00:11:48.854 "base_bdevs_list": [ 00:11:48.854 { 00:11:48.854 "name": "BaseBdev1", 00:11:48.854 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:48.854 "is_configured": true, 00:11:48.854 "data_offset": 0, 00:11:48.854 "data_size": 65536 00:11:48.855 }, 00:11:48.855 { 00:11:48.855 "name": null, 00:11:48.855 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:48.855 "is_configured": false, 00:11:48.855 "data_offset": 0, 00:11:48.855 "data_size": 65536 00:11:48.855 }, 00:11:48.855 { 00:11:48.855 "name": null, 00:11:48.855 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:48.855 "is_configured": false, 00:11:48.855 "data_offset": 0, 00:11:48.855 "data_size": 65536 00:11:48.855 }, 00:11:48.855 { 00:11:48.855 "name": "BaseBdev4", 00:11:48.855 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:48.855 "is_configured": true, 00:11:48.855 "data_offset": 0, 00:11:48.855 "data_size": 65536 00:11:48.855 } 00:11:48.855 ] 00:11:48.855 }' 00:11:48.855 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.855 12:37:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.113 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.114 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.114 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.114 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.114 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.114 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:49.114 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:49.114 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.114 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.114 [2024-12-14 12:37:48.846354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.386 12:37:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.386 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.386 "name": "Existed_Raid", 00:11:49.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.386 "strip_size_kb": 0, 00:11:49.386 "state": "configuring", 00:11:49.386 "raid_level": "raid1", 00:11:49.386 "superblock": false, 00:11:49.386 "num_base_bdevs": 4, 00:11:49.386 "num_base_bdevs_discovered": 3, 00:11:49.386 "num_base_bdevs_operational": 4, 00:11:49.386 "base_bdevs_list": [ 00:11:49.386 { 00:11:49.386 "name": "BaseBdev1", 00:11:49.386 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:49.386 "is_configured": true, 00:11:49.387 "data_offset": 0, 00:11:49.387 "data_size": 65536 00:11:49.387 }, 00:11:49.387 { 00:11:49.387 "name": null, 00:11:49.387 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:49.387 "is_configured": false, 00:11:49.387 "data_offset": 
0, 00:11:49.387 "data_size": 65536 00:11:49.387 }, 00:11:49.387 { 00:11:49.387 "name": "BaseBdev3", 00:11:49.387 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:49.387 "is_configured": true, 00:11:49.387 "data_offset": 0, 00:11:49.387 "data_size": 65536 00:11:49.387 }, 00:11:49.387 { 00:11:49.387 "name": "BaseBdev4", 00:11:49.387 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:49.387 "is_configured": true, 00:11:49.387 "data_offset": 0, 00:11:49.387 "data_size": 65536 00:11:49.387 } 00:11:49.387 ] 00:11:49.387 }' 00:11:49.387 12:37:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.387 12:37:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.667 [2024-12-14 12:37:49.293622] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.667 12:37:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.667 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.926 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.926 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.926 "name": "Existed_Raid", 00:11:49.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.926 "strip_size_kb": 0, 00:11:49.926 "state": "configuring", 00:11:49.926 
"raid_level": "raid1", 00:11:49.926 "superblock": false, 00:11:49.926 "num_base_bdevs": 4, 00:11:49.926 "num_base_bdevs_discovered": 2, 00:11:49.926 "num_base_bdevs_operational": 4, 00:11:49.926 "base_bdevs_list": [ 00:11:49.926 { 00:11:49.926 "name": null, 00:11:49.926 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:49.926 "is_configured": false, 00:11:49.926 "data_offset": 0, 00:11:49.926 "data_size": 65536 00:11:49.926 }, 00:11:49.926 { 00:11:49.926 "name": null, 00:11:49.926 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:49.926 "is_configured": false, 00:11:49.926 "data_offset": 0, 00:11:49.926 "data_size": 65536 00:11:49.926 }, 00:11:49.926 { 00:11:49.926 "name": "BaseBdev3", 00:11:49.926 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:49.926 "is_configured": true, 00:11:49.926 "data_offset": 0, 00:11:49.926 "data_size": 65536 00:11:49.926 }, 00:11:49.926 { 00:11:49.926 "name": "BaseBdev4", 00:11:49.926 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:49.926 "is_configured": true, 00:11:49.926 "data_offset": 0, 00:11:49.926 "data_size": 65536 00:11:49.926 } 00:11:49.926 ] 00:11:49.926 }' 00:11:49.926 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.926 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.186 [2024-12-14 12:37:49.878192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.186 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.445 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.445 "name": "Existed_Raid", 00:11:50.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.445 "strip_size_kb": 0, 00:11:50.445 "state": "configuring", 00:11:50.445 "raid_level": "raid1", 00:11:50.445 "superblock": false, 00:11:50.445 "num_base_bdevs": 4, 00:11:50.445 "num_base_bdevs_discovered": 3, 00:11:50.445 "num_base_bdevs_operational": 4, 00:11:50.445 "base_bdevs_list": [ 00:11:50.445 { 00:11:50.445 "name": null, 00:11:50.445 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:50.445 "is_configured": false, 00:11:50.445 "data_offset": 0, 00:11:50.445 "data_size": 65536 00:11:50.445 }, 00:11:50.445 { 00:11:50.445 "name": "BaseBdev2", 00:11:50.445 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:50.445 "is_configured": true, 00:11:50.445 "data_offset": 0, 00:11:50.445 "data_size": 65536 00:11:50.445 }, 00:11:50.445 { 00:11:50.445 "name": "BaseBdev3", 00:11:50.445 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:50.445 "is_configured": true, 00:11:50.445 "data_offset": 0, 00:11:50.445 "data_size": 65536 00:11:50.445 }, 00:11:50.445 { 00:11:50.445 "name": "BaseBdev4", 00:11:50.445 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:50.445 "is_configured": true, 00:11:50.445 "data_offset": 0, 00:11:50.445 "data_size": 65536 00:11:50.445 } 00:11:50.445 ] 00:11:50.445 }' 00:11:50.445 12:37:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.445 12:37:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.704 12:37:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7842f744-c18c-4871-82ee-77da7b54a1b3 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.704 [2024-12-14 12:37:50.427710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:50.704 [2024-12-14 12:37:50.427768] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:50.704 [2024-12-14 12:37:50.427779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:50.704 
[2024-12-14 12:37:50.428042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:50.704 [2024-12-14 12:37:50.428239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:50.704 [2024-12-14 12:37:50.428260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:50.704 [2024-12-14 12:37:50.428514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.704 NewBaseBdev 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.704 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.964 [ 00:11:50.964 { 00:11:50.964 "name": "NewBaseBdev", 00:11:50.964 "aliases": [ 00:11:50.964 "7842f744-c18c-4871-82ee-77da7b54a1b3" 00:11:50.964 ], 00:11:50.964 "product_name": "Malloc disk", 00:11:50.964 "block_size": 512, 00:11:50.964 "num_blocks": 65536, 00:11:50.964 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:50.964 "assigned_rate_limits": { 00:11:50.964 "rw_ios_per_sec": 0, 00:11:50.964 "rw_mbytes_per_sec": 0, 00:11:50.964 "r_mbytes_per_sec": 0, 00:11:50.964 "w_mbytes_per_sec": 0 00:11:50.964 }, 00:11:50.964 "claimed": true, 00:11:50.964 "claim_type": "exclusive_write", 00:11:50.964 "zoned": false, 00:11:50.964 "supported_io_types": { 00:11:50.964 "read": true, 00:11:50.964 "write": true, 00:11:50.964 "unmap": true, 00:11:50.964 "flush": true, 00:11:50.964 "reset": true, 00:11:50.964 "nvme_admin": false, 00:11:50.964 "nvme_io": false, 00:11:50.964 "nvme_io_md": false, 00:11:50.964 "write_zeroes": true, 00:11:50.964 "zcopy": true, 00:11:50.964 "get_zone_info": false, 00:11:50.964 "zone_management": false, 00:11:50.964 "zone_append": false, 00:11:50.964 "compare": false, 00:11:50.964 "compare_and_write": false, 00:11:50.964 "abort": true, 00:11:50.964 "seek_hole": false, 00:11:50.964 "seek_data": false, 00:11:50.964 "copy": true, 00:11:50.964 "nvme_iov_md": false 00:11:50.964 }, 00:11:50.964 "memory_domains": [ 00:11:50.964 { 00:11:50.964 "dma_device_id": "system", 00:11:50.964 "dma_device_type": 1 00:11:50.964 }, 00:11:50.964 { 00:11:50.964 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.964 "dma_device_type": 2 00:11:50.964 } 00:11:50.964 ], 00:11:50.964 "driver_specific": {} 00:11:50.964 } 00:11:50.964 ] 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.964 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.964 "name": "Existed_Raid", 00:11:50.964 "uuid": "7cc6f753-ee8b-42ca-bbcb-7155bd7a3aa7", 00:11:50.964 "strip_size_kb": 0, 00:11:50.964 "state": "online", 00:11:50.964 
"raid_level": "raid1", 00:11:50.964 "superblock": false, 00:11:50.964 "num_base_bdevs": 4, 00:11:50.964 "num_base_bdevs_discovered": 4, 00:11:50.964 "num_base_bdevs_operational": 4, 00:11:50.965 "base_bdevs_list": [ 00:11:50.965 { 00:11:50.965 "name": "NewBaseBdev", 00:11:50.965 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:50.965 "is_configured": true, 00:11:50.965 "data_offset": 0, 00:11:50.965 "data_size": 65536 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "name": "BaseBdev2", 00:11:50.965 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:50.965 "is_configured": true, 00:11:50.965 "data_offset": 0, 00:11:50.965 "data_size": 65536 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "name": "BaseBdev3", 00:11:50.965 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:50.965 "is_configured": true, 00:11:50.965 "data_offset": 0, 00:11:50.965 "data_size": 65536 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "name": "BaseBdev4", 00:11:50.965 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:50.965 "is_configured": true, 00:11:50.965 "data_offset": 0, 00:11:50.965 "data_size": 65536 00:11:50.965 } 00:11:50.965 ] 00:11:50.965 }' 00:11:50.965 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.965 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.233 [2024-12-14 12:37:50.911434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.233 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:51.233 "name": "Existed_Raid", 00:11:51.233 "aliases": [ 00:11:51.233 "7cc6f753-ee8b-42ca-bbcb-7155bd7a3aa7" 00:11:51.233 ], 00:11:51.233 "product_name": "Raid Volume", 00:11:51.233 "block_size": 512, 00:11:51.233 "num_blocks": 65536, 00:11:51.233 "uuid": "7cc6f753-ee8b-42ca-bbcb-7155bd7a3aa7", 00:11:51.233 "assigned_rate_limits": { 00:11:51.233 "rw_ios_per_sec": 0, 00:11:51.233 "rw_mbytes_per_sec": 0, 00:11:51.233 "r_mbytes_per_sec": 0, 00:11:51.233 "w_mbytes_per_sec": 0 00:11:51.233 }, 00:11:51.233 "claimed": false, 00:11:51.233 "zoned": false, 00:11:51.233 "supported_io_types": { 00:11:51.233 "read": true, 00:11:51.233 "write": true, 00:11:51.233 "unmap": false, 00:11:51.233 "flush": false, 00:11:51.233 "reset": true, 00:11:51.233 "nvme_admin": false, 00:11:51.233 "nvme_io": false, 00:11:51.233 "nvme_io_md": false, 00:11:51.233 "write_zeroes": true, 00:11:51.233 "zcopy": false, 00:11:51.234 "get_zone_info": false, 00:11:51.234 "zone_management": false, 00:11:51.234 "zone_append": false, 00:11:51.234 "compare": false, 00:11:51.234 "compare_and_write": false, 00:11:51.234 "abort": false, 00:11:51.234 "seek_hole": false, 00:11:51.234 "seek_data": false, 00:11:51.234 
"copy": false, 00:11:51.234 "nvme_iov_md": false 00:11:51.234 }, 00:11:51.234 "memory_domains": [ 00:11:51.234 { 00:11:51.234 "dma_device_id": "system", 00:11:51.234 "dma_device_type": 1 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.234 "dma_device_type": 2 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "dma_device_id": "system", 00:11:51.234 "dma_device_type": 1 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.234 "dma_device_type": 2 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "dma_device_id": "system", 00:11:51.234 "dma_device_type": 1 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.234 "dma_device_type": 2 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "dma_device_id": "system", 00:11:51.234 "dma_device_type": 1 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.234 "dma_device_type": 2 00:11:51.234 } 00:11:51.234 ], 00:11:51.234 "driver_specific": { 00:11:51.234 "raid": { 00:11:51.234 "uuid": "7cc6f753-ee8b-42ca-bbcb-7155bd7a3aa7", 00:11:51.234 "strip_size_kb": 0, 00:11:51.234 "state": "online", 00:11:51.234 "raid_level": "raid1", 00:11:51.234 "superblock": false, 00:11:51.234 "num_base_bdevs": 4, 00:11:51.234 "num_base_bdevs_discovered": 4, 00:11:51.234 "num_base_bdevs_operational": 4, 00:11:51.234 "base_bdevs_list": [ 00:11:51.234 { 00:11:51.234 "name": "NewBaseBdev", 00:11:51.234 "uuid": "7842f744-c18c-4871-82ee-77da7b54a1b3", 00:11:51.234 "is_configured": true, 00:11:51.234 "data_offset": 0, 00:11:51.234 "data_size": 65536 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "name": "BaseBdev2", 00:11:51.234 "uuid": "1b3f6990-9076-487a-99bd-d0782fd6192f", 00:11:51.234 "is_configured": true, 00:11:51.234 "data_offset": 0, 00:11:51.234 "data_size": 65536 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "name": "BaseBdev3", 00:11:51.234 "uuid": "9fdfe8ce-aaf3-4fae-9c18-adf0437edc23", 00:11:51.234 
"is_configured": true, 00:11:51.234 "data_offset": 0, 00:11:51.234 "data_size": 65536 00:11:51.234 }, 00:11:51.234 { 00:11:51.234 "name": "BaseBdev4", 00:11:51.234 "uuid": "791051ed-d0ca-4c7a-915c-ea916a37b1d0", 00:11:51.234 "is_configured": true, 00:11:51.234 "data_offset": 0, 00:11:51.234 "data_size": 65536 00:11:51.234 } 00:11:51.234 ] 00:11:51.234 } 00:11:51.234 } 00:11:51.234 }' 00:11:51.234 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:51.493 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:51.493 BaseBdev2 00:11:51.493 BaseBdev3 00:11:51.493 BaseBdev4' 00:11:51.493 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.493 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:51.493 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.493 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:51.493 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.493 12:37:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.493 12:37:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.493 12:37:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.493 12:37:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.493 [2024-12-14 12:37:51.166573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.493 [2024-12-14 12:37:51.166608] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.493 [2024-12-14 12:37:51.166695] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.493 [2024-12-14 12:37:51.167013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.493 [2024-12-14 12:37:51.167034] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 74959 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74959 ']' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74959 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74959 00:11:51.493 killing process with pid 74959 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74959' 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74959 00:11:51.493 [2024-12-14 12:37:51.214618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.493 12:37:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74959 00:11:52.060 [2024-12-14 12:37:51.622399] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.439 ************************************ 00:11:53.439 END TEST raid_state_function_test 00:11:53.439 ************************************ 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:53.439 00:11:53.439 real 0m11.428s 00:11:53.439 user 0m18.157s 00:11:53.439 sys 0m1.941s 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:53.439 12:37:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:53.439 12:37:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:53.439 12:37:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.439 12:37:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.439 ************************************ 00:11:53.439 START TEST raid_state_function_test_sb 00:11:53.439 ************************************ 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.439 
12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=75629 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:53.439 Process raid pid: 75629 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 75629' 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 75629 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75629 ']' 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:53.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:53.439 12:37:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.439 [2024-12-14 12:37:52.963974] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:11:53.439 [2024-12-14 12:37:52.964106] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.439 [2024-12-14 12:37:53.123770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.698 [2024-12-14 12:37:53.241007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.957 [2024-12-14 12:37:53.449799] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:53.957 [2024-12-14 12:37:53.449849] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.216 [2024-12-14 12:37:53.812862] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.216 [2024-12-14 12:37:53.812915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.216 [2024-12-14 12:37:53.812925] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.216 [2024-12-14 12:37:53.812951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.216 [2024-12-14 12:37:53.812958] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:54.216 [2024-12-14 12:37:53.812968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.216 [2024-12-14 12:37:53.812975] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.216 [2024-12-14 12:37:53.812985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.216 12:37:53 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.216 "name": "Existed_Raid", 00:11:54.216 "uuid": "f27a264c-32e9-49a5-bf0c-ebb957fb1515", 00:11:54.216 "strip_size_kb": 0, 00:11:54.216 "state": "configuring", 00:11:54.216 "raid_level": "raid1", 00:11:54.216 "superblock": true, 00:11:54.216 "num_base_bdevs": 4, 00:11:54.216 "num_base_bdevs_discovered": 0, 00:11:54.216 "num_base_bdevs_operational": 4, 00:11:54.216 "base_bdevs_list": [ 00:11:54.216 { 00:11:54.216 "name": "BaseBdev1", 00:11:54.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.216 "is_configured": false, 00:11:54.216 "data_offset": 0, 00:11:54.216 "data_size": 0 00:11:54.216 }, 00:11:54.216 { 00:11:54.216 "name": "BaseBdev2", 00:11:54.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.216 "is_configured": false, 00:11:54.216 "data_offset": 0, 00:11:54.216 "data_size": 0 00:11:54.216 }, 00:11:54.216 { 00:11:54.216 "name": "BaseBdev3", 00:11:54.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.216 "is_configured": false, 00:11:54.216 "data_offset": 0, 00:11:54.216 "data_size": 0 00:11:54.216 }, 00:11:54.216 { 00:11:54.216 "name": "BaseBdev4", 00:11:54.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.216 "is_configured": false, 00:11:54.216 "data_offset": 0, 00:11:54.216 "data_size": 0 00:11:54.216 } 00:11:54.216 ] 00:11:54.216 }' 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.216 12:37:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.785 [2024-12-14 12:37:54.220122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:54.785 [2024-12-14 12:37:54.220168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.785 [2024-12-14 12:37:54.232092] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.785 [2024-12-14 12:37:54.232132] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.785 [2024-12-14 12:37:54.232141] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.785 [2024-12-14 12:37:54.232151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.785 [2024-12-14 12:37:54.232157] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:54.785 [2024-12-14 12:37:54.232166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.785 [2024-12-14 12:37:54.232173] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:54.785 [2024-12-14 12:37:54.232181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.785 [2024-12-14 12:37:54.280530] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.785 BaseBdev1 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.785 [ 00:11:54.785 { 00:11:54.785 "name": "BaseBdev1", 00:11:54.785 "aliases": [ 00:11:54.785 "744141ad-6fe1-406f-8fea-c657a2c26576" 00:11:54.785 ], 00:11:54.785 "product_name": "Malloc disk", 00:11:54.785 "block_size": 512, 00:11:54.785 "num_blocks": 65536, 00:11:54.785 "uuid": "744141ad-6fe1-406f-8fea-c657a2c26576", 00:11:54.785 "assigned_rate_limits": { 00:11:54.785 "rw_ios_per_sec": 0, 00:11:54.785 "rw_mbytes_per_sec": 0, 00:11:54.785 "r_mbytes_per_sec": 0, 00:11:54.785 "w_mbytes_per_sec": 0 00:11:54.785 }, 00:11:54.785 "claimed": true, 00:11:54.785 "claim_type": "exclusive_write", 00:11:54.785 "zoned": false, 00:11:54.785 "supported_io_types": { 00:11:54.785 "read": true, 00:11:54.785 "write": true, 00:11:54.785 "unmap": true, 00:11:54.785 "flush": true, 00:11:54.785 "reset": true, 00:11:54.785 "nvme_admin": false, 00:11:54.785 "nvme_io": false, 00:11:54.785 "nvme_io_md": false, 00:11:54.785 "write_zeroes": true, 00:11:54.785 "zcopy": true, 00:11:54.785 "get_zone_info": false, 00:11:54.785 "zone_management": false, 00:11:54.785 "zone_append": false, 00:11:54.785 "compare": false, 00:11:54.785 "compare_and_write": false, 00:11:54.785 "abort": true, 00:11:54.785 "seek_hole": false, 00:11:54.785 "seek_data": false, 00:11:54.785 "copy": true, 00:11:54.785 "nvme_iov_md": false 00:11:54.785 }, 00:11:54.785 "memory_domains": [ 00:11:54.785 { 00:11:54.785 "dma_device_id": "system", 00:11:54.785 "dma_device_type": 1 00:11:54.785 }, 00:11:54.785 { 00:11:54.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.785 "dma_device_type": 2 00:11:54.785 } 00:11:54.785 ], 00:11:54.785 "driver_specific": {} 
00:11:54.785 } 00:11:54.785 ] 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.785 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.785 "name": "Existed_Raid", 00:11:54.785 "uuid": "213b3af4-80ec-4ec3-be4b-eda34bdd86fd", 00:11:54.785 "strip_size_kb": 0, 00:11:54.785 "state": "configuring", 00:11:54.785 "raid_level": "raid1", 00:11:54.785 "superblock": true, 00:11:54.785 "num_base_bdevs": 4, 00:11:54.785 "num_base_bdevs_discovered": 1, 00:11:54.785 "num_base_bdevs_operational": 4, 00:11:54.785 "base_bdevs_list": [ 00:11:54.785 { 00:11:54.785 "name": "BaseBdev1", 00:11:54.785 "uuid": "744141ad-6fe1-406f-8fea-c657a2c26576", 00:11:54.785 "is_configured": true, 00:11:54.785 "data_offset": 2048, 00:11:54.785 "data_size": 63488 00:11:54.785 }, 00:11:54.785 { 00:11:54.786 "name": "BaseBdev2", 00:11:54.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.786 "is_configured": false, 00:11:54.786 "data_offset": 0, 00:11:54.786 "data_size": 0 00:11:54.786 }, 00:11:54.786 { 00:11:54.786 "name": "BaseBdev3", 00:11:54.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.786 "is_configured": false, 00:11:54.786 "data_offset": 0, 00:11:54.786 "data_size": 0 00:11:54.786 }, 00:11:54.786 { 00:11:54.786 "name": "BaseBdev4", 00:11:54.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.786 "is_configured": false, 00:11:54.786 "data_offset": 0, 00:11:54.786 "data_size": 0 00:11:54.786 } 00:11:54.786 ] 00:11:54.786 }' 00:11:54.786 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.786 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.044 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.044 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.044 12:37:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.303 [2024-12-14 12:37:54.783746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.303 [2024-12-14 12:37:54.783814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:55.303 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.303 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.303 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.303 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.303 [2024-12-14 12:37:54.795763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.303 [2024-12-14 12:37:54.797626] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.303 [2024-12-14 12:37:54.797670] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.304 [2024-12-14 12:37:54.797681] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.304 [2024-12-14 12:37:54.797692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.304 [2024-12-14 12:37:54.797699] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:55.304 [2024-12-14 12:37:54.797707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:55.304 12:37:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.304 "name": 
"Existed_Raid", 00:11:55.304 "uuid": "2e1289cb-f637-4271-9ee1-6ddd90878f87", 00:11:55.304 "strip_size_kb": 0, 00:11:55.304 "state": "configuring", 00:11:55.304 "raid_level": "raid1", 00:11:55.304 "superblock": true, 00:11:55.304 "num_base_bdevs": 4, 00:11:55.304 "num_base_bdevs_discovered": 1, 00:11:55.304 "num_base_bdevs_operational": 4, 00:11:55.304 "base_bdevs_list": [ 00:11:55.304 { 00:11:55.304 "name": "BaseBdev1", 00:11:55.304 "uuid": "744141ad-6fe1-406f-8fea-c657a2c26576", 00:11:55.304 "is_configured": true, 00:11:55.304 "data_offset": 2048, 00:11:55.304 "data_size": 63488 00:11:55.304 }, 00:11:55.304 { 00:11:55.304 "name": "BaseBdev2", 00:11:55.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.304 "is_configured": false, 00:11:55.304 "data_offset": 0, 00:11:55.304 "data_size": 0 00:11:55.304 }, 00:11:55.304 { 00:11:55.304 "name": "BaseBdev3", 00:11:55.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.304 "is_configured": false, 00:11:55.304 "data_offset": 0, 00:11:55.304 "data_size": 0 00:11:55.304 }, 00:11:55.304 { 00:11:55.304 "name": "BaseBdev4", 00:11:55.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.304 "is_configured": false, 00:11:55.304 "data_offset": 0, 00:11:55.304 "data_size": 0 00:11:55.304 } 00:11:55.304 ] 00:11:55.304 }' 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.304 12:37:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.562 [2024-12-14 12:37:55.274603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:55.562 
BaseBdev2 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.562 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.820 [ 00:11:55.820 { 00:11:55.820 "name": "BaseBdev2", 00:11:55.820 "aliases": [ 00:11:55.820 "25c6af5f-015d-4801-8e95-f626356d16af" 00:11:55.820 ], 00:11:55.820 "product_name": "Malloc disk", 00:11:55.820 "block_size": 512, 00:11:55.820 "num_blocks": 65536, 00:11:55.820 "uuid": "25c6af5f-015d-4801-8e95-f626356d16af", 00:11:55.820 "assigned_rate_limits": { 
00:11:55.820 "rw_ios_per_sec": 0, 00:11:55.820 "rw_mbytes_per_sec": 0, 00:11:55.820 "r_mbytes_per_sec": 0, 00:11:55.820 "w_mbytes_per_sec": 0 00:11:55.820 }, 00:11:55.820 "claimed": true, 00:11:55.820 "claim_type": "exclusive_write", 00:11:55.820 "zoned": false, 00:11:55.820 "supported_io_types": { 00:11:55.820 "read": true, 00:11:55.820 "write": true, 00:11:55.820 "unmap": true, 00:11:55.820 "flush": true, 00:11:55.820 "reset": true, 00:11:55.820 "nvme_admin": false, 00:11:55.820 "nvme_io": false, 00:11:55.820 "nvme_io_md": false, 00:11:55.820 "write_zeroes": true, 00:11:55.820 "zcopy": true, 00:11:55.820 "get_zone_info": false, 00:11:55.820 "zone_management": false, 00:11:55.820 "zone_append": false, 00:11:55.820 "compare": false, 00:11:55.820 "compare_and_write": false, 00:11:55.820 "abort": true, 00:11:55.820 "seek_hole": false, 00:11:55.820 "seek_data": false, 00:11:55.821 "copy": true, 00:11:55.821 "nvme_iov_md": false 00:11:55.821 }, 00:11:55.821 "memory_domains": [ 00:11:55.821 { 00:11:55.821 "dma_device_id": "system", 00:11:55.821 "dma_device_type": 1 00:11:55.821 }, 00:11:55.821 { 00:11:55.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.821 "dma_device_type": 2 00:11:55.821 } 00:11:55.821 ], 00:11:55.821 "driver_specific": {} 00:11:55.821 } 00:11:55.821 ] 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.821 "name": "Existed_Raid", 00:11:55.821 "uuid": "2e1289cb-f637-4271-9ee1-6ddd90878f87", 00:11:55.821 "strip_size_kb": 0, 00:11:55.821 "state": "configuring", 00:11:55.821 "raid_level": "raid1", 00:11:55.821 "superblock": true, 00:11:55.821 "num_base_bdevs": 4, 00:11:55.821 "num_base_bdevs_discovered": 2, 00:11:55.821 "num_base_bdevs_operational": 4, 00:11:55.821 
"base_bdevs_list": [ 00:11:55.821 { 00:11:55.821 "name": "BaseBdev1", 00:11:55.821 "uuid": "744141ad-6fe1-406f-8fea-c657a2c26576", 00:11:55.821 "is_configured": true, 00:11:55.821 "data_offset": 2048, 00:11:55.821 "data_size": 63488 00:11:55.821 }, 00:11:55.821 { 00:11:55.821 "name": "BaseBdev2", 00:11:55.821 "uuid": "25c6af5f-015d-4801-8e95-f626356d16af", 00:11:55.821 "is_configured": true, 00:11:55.821 "data_offset": 2048, 00:11:55.821 "data_size": 63488 00:11:55.821 }, 00:11:55.821 { 00:11:55.821 "name": "BaseBdev3", 00:11:55.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.821 "is_configured": false, 00:11:55.821 "data_offset": 0, 00:11:55.821 "data_size": 0 00:11:55.821 }, 00:11:55.821 { 00:11:55.821 "name": "BaseBdev4", 00:11:55.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.821 "is_configured": false, 00:11:55.821 "data_offset": 0, 00:11:55.821 "data_size": 0 00:11:55.821 } 00:11:55.821 ] 00:11:55.821 }' 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.821 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.080 [2024-12-14 12:37:55.782996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.080 BaseBdev3 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.080 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.080 [ 00:11:56.080 { 00:11:56.080 "name": "BaseBdev3", 00:11:56.080 "aliases": [ 00:11:56.080 "238dece8-938a-4eae-865b-bbf5027226fe" 00:11:56.080 ], 00:11:56.080 "product_name": "Malloc disk", 00:11:56.080 "block_size": 512, 00:11:56.080 "num_blocks": 65536, 00:11:56.080 "uuid": "238dece8-938a-4eae-865b-bbf5027226fe", 00:11:56.080 "assigned_rate_limits": { 00:11:56.080 "rw_ios_per_sec": 0, 00:11:56.080 "rw_mbytes_per_sec": 0, 00:11:56.080 "r_mbytes_per_sec": 0, 00:11:56.080 "w_mbytes_per_sec": 0 00:11:56.080 }, 00:11:56.080 "claimed": true, 00:11:56.080 "claim_type": "exclusive_write", 00:11:56.080 "zoned": false, 00:11:56.339 "supported_io_types": { 00:11:56.339 "read": true, 00:11:56.339 
"write": true, 00:11:56.339 "unmap": true, 00:11:56.339 "flush": true, 00:11:56.339 "reset": true, 00:11:56.339 "nvme_admin": false, 00:11:56.339 "nvme_io": false, 00:11:56.339 "nvme_io_md": false, 00:11:56.339 "write_zeroes": true, 00:11:56.339 "zcopy": true, 00:11:56.339 "get_zone_info": false, 00:11:56.339 "zone_management": false, 00:11:56.339 "zone_append": false, 00:11:56.339 "compare": false, 00:11:56.339 "compare_and_write": false, 00:11:56.339 "abort": true, 00:11:56.339 "seek_hole": false, 00:11:56.339 "seek_data": false, 00:11:56.339 "copy": true, 00:11:56.339 "nvme_iov_md": false 00:11:56.339 }, 00:11:56.339 "memory_domains": [ 00:11:56.339 { 00:11:56.339 "dma_device_id": "system", 00:11:56.339 "dma_device_type": 1 00:11:56.339 }, 00:11:56.339 { 00:11:56.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.339 "dma_device_type": 2 00:11:56.339 } 00:11:56.339 ], 00:11:56.339 "driver_specific": {} 00:11:56.339 } 00:11:56.339 ] 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.339 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.339 "name": "Existed_Raid", 00:11:56.339 "uuid": "2e1289cb-f637-4271-9ee1-6ddd90878f87", 00:11:56.339 "strip_size_kb": 0, 00:11:56.339 "state": "configuring", 00:11:56.339 "raid_level": "raid1", 00:11:56.339 "superblock": true, 00:11:56.339 "num_base_bdevs": 4, 00:11:56.339 "num_base_bdevs_discovered": 3, 00:11:56.339 "num_base_bdevs_operational": 4, 00:11:56.339 "base_bdevs_list": [ 00:11:56.339 { 00:11:56.339 "name": "BaseBdev1", 00:11:56.339 "uuid": "744141ad-6fe1-406f-8fea-c657a2c26576", 00:11:56.339 "is_configured": true, 00:11:56.339 "data_offset": 2048, 00:11:56.339 "data_size": 63488 00:11:56.339 }, 00:11:56.339 { 00:11:56.340 "name": "BaseBdev2", 00:11:56.340 "uuid": 
"25c6af5f-015d-4801-8e95-f626356d16af", 00:11:56.340 "is_configured": true, 00:11:56.340 "data_offset": 2048, 00:11:56.340 "data_size": 63488 00:11:56.340 }, 00:11:56.340 { 00:11:56.340 "name": "BaseBdev3", 00:11:56.340 "uuid": "238dece8-938a-4eae-865b-bbf5027226fe", 00:11:56.340 "is_configured": true, 00:11:56.340 "data_offset": 2048, 00:11:56.340 "data_size": 63488 00:11:56.340 }, 00:11:56.340 { 00:11:56.340 "name": "BaseBdev4", 00:11:56.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.340 "is_configured": false, 00:11:56.340 "data_offset": 0, 00:11:56.340 "data_size": 0 00:11:56.340 } 00:11:56.340 ] 00:11:56.340 }' 00:11:56.340 12:37:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.340 12:37:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.599 [2024-12-14 12:37:56.255013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:56.599 [2024-12-14 12:37:56.255299] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:56.599 [2024-12-14 12:37:56.255318] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:56.599 [2024-12-14 12:37:56.255611] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:56.599 BaseBdev4 00:11:56.599 [2024-12-14 12:37:56.255803] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:56.599 [2024-12-14 12:37:56.255830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:56.599 [2024-12-14 12:37:56.255998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.599 [ 00:11:56.599 { 00:11:56.599 "name": "BaseBdev4", 00:11:56.599 "aliases": [ 00:11:56.599 "849d534a-f42c-417f-8b83-81915c7d8309" 00:11:56.599 ], 00:11:56.599 "product_name": "Malloc disk", 00:11:56.599 "block_size": 512, 00:11:56.599 
"num_blocks": 65536, 00:11:56.599 "uuid": "849d534a-f42c-417f-8b83-81915c7d8309", 00:11:56.599 "assigned_rate_limits": { 00:11:56.599 "rw_ios_per_sec": 0, 00:11:56.599 "rw_mbytes_per_sec": 0, 00:11:56.599 "r_mbytes_per_sec": 0, 00:11:56.599 "w_mbytes_per_sec": 0 00:11:56.599 }, 00:11:56.599 "claimed": true, 00:11:56.599 "claim_type": "exclusive_write", 00:11:56.599 "zoned": false, 00:11:56.599 "supported_io_types": { 00:11:56.599 "read": true, 00:11:56.599 "write": true, 00:11:56.599 "unmap": true, 00:11:56.599 "flush": true, 00:11:56.599 "reset": true, 00:11:56.599 "nvme_admin": false, 00:11:56.599 "nvme_io": false, 00:11:56.599 "nvme_io_md": false, 00:11:56.599 "write_zeroes": true, 00:11:56.599 "zcopy": true, 00:11:56.599 "get_zone_info": false, 00:11:56.599 "zone_management": false, 00:11:56.599 "zone_append": false, 00:11:56.599 "compare": false, 00:11:56.599 "compare_and_write": false, 00:11:56.599 "abort": true, 00:11:56.599 "seek_hole": false, 00:11:56.599 "seek_data": false, 00:11:56.599 "copy": true, 00:11:56.599 "nvme_iov_md": false 00:11:56.599 }, 00:11:56.599 "memory_domains": [ 00:11:56.599 { 00:11:56.599 "dma_device_id": "system", 00:11:56.599 "dma_device_type": 1 00:11:56.599 }, 00:11:56.599 { 00:11:56.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.599 "dma_device_type": 2 00:11:56.599 } 00:11:56.599 ], 00:11:56.599 "driver_specific": {} 00:11:56.599 } 00:11:56.599 ] 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.599 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.859 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.859 "name": "Existed_Raid", 00:11:56.859 "uuid": "2e1289cb-f637-4271-9ee1-6ddd90878f87", 00:11:56.859 "strip_size_kb": 0, 00:11:56.859 "state": "online", 00:11:56.859 "raid_level": "raid1", 00:11:56.859 "superblock": true, 00:11:56.859 "num_base_bdevs": 4, 
00:11:56.859 "num_base_bdevs_discovered": 4, 00:11:56.859 "num_base_bdevs_operational": 4, 00:11:56.859 "base_bdevs_list": [ 00:11:56.859 { 00:11:56.859 "name": "BaseBdev1", 00:11:56.859 "uuid": "744141ad-6fe1-406f-8fea-c657a2c26576", 00:11:56.859 "is_configured": true, 00:11:56.859 "data_offset": 2048, 00:11:56.859 "data_size": 63488 00:11:56.859 }, 00:11:56.859 { 00:11:56.859 "name": "BaseBdev2", 00:11:56.859 "uuid": "25c6af5f-015d-4801-8e95-f626356d16af", 00:11:56.859 "is_configured": true, 00:11:56.859 "data_offset": 2048, 00:11:56.859 "data_size": 63488 00:11:56.859 }, 00:11:56.859 { 00:11:56.859 "name": "BaseBdev3", 00:11:56.859 "uuid": "238dece8-938a-4eae-865b-bbf5027226fe", 00:11:56.859 "is_configured": true, 00:11:56.859 "data_offset": 2048, 00:11:56.859 "data_size": 63488 00:11:56.859 }, 00:11:56.859 { 00:11:56.859 "name": "BaseBdev4", 00:11:56.859 "uuid": "849d534a-f42c-417f-8b83-81915c7d8309", 00:11:56.859 "is_configured": true, 00:11:56.859 "data_offset": 2048, 00:11:56.859 "data_size": 63488 00:11:56.859 } 00:11:56.859 ] 00:11:56.859 }' 00:11:56.859 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.859 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.118 
12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.118 [2024-12-14 12:37:56.738606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.118 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.118 "name": "Existed_Raid", 00:11:57.118 "aliases": [ 00:11:57.118 "2e1289cb-f637-4271-9ee1-6ddd90878f87" 00:11:57.118 ], 00:11:57.118 "product_name": "Raid Volume", 00:11:57.118 "block_size": 512, 00:11:57.118 "num_blocks": 63488, 00:11:57.118 "uuid": "2e1289cb-f637-4271-9ee1-6ddd90878f87", 00:11:57.118 "assigned_rate_limits": { 00:11:57.118 "rw_ios_per_sec": 0, 00:11:57.118 "rw_mbytes_per_sec": 0, 00:11:57.118 "r_mbytes_per_sec": 0, 00:11:57.118 "w_mbytes_per_sec": 0 00:11:57.118 }, 00:11:57.118 "claimed": false, 00:11:57.118 "zoned": false, 00:11:57.118 "supported_io_types": { 00:11:57.118 "read": true, 00:11:57.118 "write": true, 00:11:57.118 "unmap": false, 00:11:57.118 "flush": false, 00:11:57.118 "reset": true, 00:11:57.118 "nvme_admin": false, 00:11:57.118 "nvme_io": false, 00:11:57.118 "nvme_io_md": false, 00:11:57.118 "write_zeroes": true, 00:11:57.118 "zcopy": false, 00:11:57.118 "get_zone_info": false, 00:11:57.118 "zone_management": false, 00:11:57.118 "zone_append": false, 00:11:57.118 "compare": false, 00:11:57.118 "compare_and_write": false, 00:11:57.118 "abort": false, 00:11:57.118 "seek_hole": false, 00:11:57.118 "seek_data": false, 00:11:57.118 "copy": false, 00:11:57.119 
"nvme_iov_md": false 00:11:57.119 }, 00:11:57.119 "memory_domains": [ 00:11:57.119 { 00:11:57.119 "dma_device_id": "system", 00:11:57.119 "dma_device_type": 1 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.119 "dma_device_type": 2 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "dma_device_id": "system", 00:11:57.119 "dma_device_type": 1 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.119 "dma_device_type": 2 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "dma_device_id": "system", 00:11:57.119 "dma_device_type": 1 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.119 "dma_device_type": 2 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "dma_device_id": "system", 00:11:57.119 "dma_device_type": 1 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.119 "dma_device_type": 2 00:11:57.119 } 00:11:57.119 ], 00:11:57.119 "driver_specific": { 00:11:57.119 "raid": { 00:11:57.119 "uuid": "2e1289cb-f637-4271-9ee1-6ddd90878f87", 00:11:57.119 "strip_size_kb": 0, 00:11:57.119 "state": "online", 00:11:57.119 "raid_level": "raid1", 00:11:57.119 "superblock": true, 00:11:57.119 "num_base_bdevs": 4, 00:11:57.119 "num_base_bdevs_discovered": 4, 00:11:57.119 "num_base_bdevs_operational": 4, 00:11:57.119 "base_bdevs_list": [ 00:11:57.119 { 00:11:57.119 "name": "BaseBdev1", 00:11:57.119 "uuid": "744141ad-6fe1-406f-8fea-c657a2c26576", 00:11:57.119 "is_configured": true, 00:11:57.119 "data_offset": 2048, 00:11:57.119 "data_size": 63488 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "name": "BaseBdev2", 00:11:57.119 "uuid": "25c6af5f-015d-4801-8e95-f626356d16af", 00:11:57.119 "is_configured": true, 00:11:57.119 "data_offset": 2048, 00:11:57.119 "data_size": 63488 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "name": "BaseBdev3", 00:11:57.119 "uuid": "238dece8-938a-4eae-865b-bbf5027226fe", 00:11:57.119 "is_configured": true, 
00:11:57.119 "data_offset": 2048, 00:11:57.119 "data_size": 63488 00:11:57.119 }, 00:11:57.119 { 00:11:57.119 "name": "BaseBdev4", 00:11:57.119 "uuid": "849d534a-f42c-417f-8b83-81915c7d8309", 00:11:57.119 "is_configured": true, 00:11:57.119 "data_offset": 2048, 00:11:57.119 "data_size": 63488 00:11:57.119 } 00:11:57.119 ] 00:11:57.119 } 00:11:57.119 } 00:11:57.119 }' 00:11:57.119 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.119 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:57.119 BaseBdev2 00:11:57.119 BaseBdev3 00:11:57.119 BaseBdev4' 00:11:57.119 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.378 12:37:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.378 12:37:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.378 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.378 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.378 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:57.378 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:57.378 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.379 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.379 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.379 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.379 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.379 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.379 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.379 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.379 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.379 [2024-12-14 12:37:57.049913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:57.638 12:37:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.638 "name": "Existed_Raid", 00:11:57.638 "uuid": "2e1289cb-f637-4271-9ee1-6ddd90878f87", 00:11:57.638 "strip_size_kb": 0, 00:11:57.638 
"state": "online", 00:11:57.638 "raid_level": "raid1", 00:11:57.638 "superblock": true, 00:11:57.638 "num_base_bdevs": 4, 00:11:57.638 "num_base_bdevs_discovered": 3, 00:11:57.638 "num_base_bdevs_operational": 3, 00:11:57.638 "base_bdevs_list": [ 00:11:57.638 { 00:11:57.638 "name": null, 00:11:57.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.638 "is_configured": false, 00:11:57.638 "data_offset": 0, 00:11:57.638 "data_size": 63488 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "name": "BaseBdev2", 00:11:57.638 "uuid": "25c6af5f-015d-4801-8e95-f626356d16af", 00:11:57.638 "is_configured": true, 00:11:57.638 "data_offset": 2048, 00:11:57.638 "data_size": 63488 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "name": "BaseBdev3", 00:11:57.638 "uuid": "238dece8-938a-4eae-865b-bbf5027226fe", 00:11:57.638 "is_configured": true, 00:11:57.638 "data_offset": 2048, 00:11:57.638 "data_size": 63488 00:11:57.638 }, 00:11:57.638 { 00:11:57.638 "name": "BaseBdev4", 00:11:57.638 "uuid": "849d534a-f42c-417f-8b83-81915c7d8309", 00:11:57.638 "is_configured": true, 00:11:57.638 "data_offset": 2048, 00:11:57.638 "data_size": 63488 00:11:57.638 } 00:11:57.638 ] 00:11:57.638 }' 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.638 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.898 12:37:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.898 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.898 [2024-12-14 12:37:57.629330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.157 [2024-12-14 12:37:57.780001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.157 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.421 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.421 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.421 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.421 12:37:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:58.421 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.421 12:37:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.421 [2024-12-14 12:37:57.935681] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:58.421 [2024-12-14 12:37:57.935784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.421 [2024-12-14 12:37:58.031719] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.421 [2024-12-14 12:37:58.031778] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.421 [2024-12-14 12:37:58.031790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.421 BaseBdev2 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:58.421 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.422 12:37:58 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:58.422 [ 00:11:58.422 { 00:11:58.422 "name": "BaseBdev2", 00:11:58.422 "aliases": [ 00:11:58.422 "53c05357-71b7-436e-ad9c-f0248a69ba4f" 00:11:58.422 ], 00:11:58.422 "product_name": "Malloc disk", 00:11:58.422 "block_size": 512, 00:11:58.422 "num_blocks": 65536, 00:11:58.422 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:11:58.422 "assigned_rate_limits": { 00:11:58.422 "rw_ios_per_sec": 0, 00:11:58.422 "rw_mbytes_per_sec": 0, 00:11:58.422 "r_mbytes_per_sec": 0, 00:11:58.422 "w_mbytes_per_sec": 0 00:11:58.422 }, 00:11:58.422 "claimed": false, 00:11:58.422 "zoned": false, 00:11:58.422 "supported_io_types": { 00:11:58.422 "read": true, 00:11:58.422 "write": true, 00:11:58.683 "unmap": true, 00:11:58.683 "flush": true, 00:11:58.683 "reset": true, 00:11:58.683 "nvme_admin": false, 00:11:58.683 "nvme_io": false, 00:11:58.683 "nvme_io_md": false, 00:11:58.683 "write_zeroes": true, 00:11:58.683 "zcopy": true, 00:11:58.683 "get_zone_info": false, 00:11:58.683 "zone_management": false, 00:11:58.683 "zone_append": false, 00:11:58.683 "compare": false, 00:11:58.683 "compare_and_write": false, 00:11:58.683 "abort": true, 00:11:58.683 "seek_hole": false, 00:11:58.683 "seek_data": false, 00:11:58.683 "copy": true, 00:11:58.683 "nvme_iov_md": false 00:11:58.683 }, 00:11:58.683 "memory_domains": [ 00:11:58.683 { 00:11:58.683 "dma_device_id": "system", 00:11:58.683 "dma_device_type": 1 00:11:58.683 }, 00:11:58.683 { 00:11:58.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.683 "dma_device_type": 2 00:11:58.683 } 00:11:58.683 ], 00:11:58.683 "driver_specific": {} 00:11:58.683 } 00:11:58.683 ] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.683 12:37:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.683 BaseBdev3 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.683 12:37:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.683 [ 00:11:58.683 { 00:11:58.683 "name": "BaseBdev3", 00:11:58.683 "aliases": [ 00:11:58.683 "35d65095-19b5-4a1b-8bac-9473e14ac2ea" 00:11:58.683 ], 00:11:58.683 "product_name": "Malloc disk", 00:11:58.683 "block_size": 512, 00:11:58.683 "num_blocks": 65536, 00:11:58.683 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:11:58.683 "assigned_rate_limits": { 00:11:58.683 "rw_ios_per_sec": 0, 00:11:58.683 "rw_mbytes_per_sec": 0, 00:11:58.683 "r_mbytes_per_sec": 0, 00:11:58.683 "w_mbytes_per_sec": 0 00:11:58.683 }, 00:11:58.683 "claimed": false, 00:11:58.683 "zoned": false, 00:11:58.683 "supported_io_types": { 00:11:58.683 "read": true, 00:11:58.683 "write": true, 00:11:58.683 "unmap": true, 00:11:58.683 "flush": true, 00:11:58.683 "reset": true, 00:11:58.683 "nvme_admin": false, 00:11:58.683 "nvme_io": false, 00:11:58.683 "nvme_io_md": false, 00:11:58.683 "write_zeroes": true, 00:11:58.683 "zcopy": true, 00:11:58.683 "get_zone_info": false, 00:11:58.683 "zone_management": false, 00:11:58.683 "zone_append": false, 00:11:58.683 "compare": false, 00:11:58.683 "compare_and_write": false, 00:11:58.683 "abort": true, 00:11:58.683 "seek_hole": false, 00:11:58.683 "seek_data": false, 00:11:58.683 "copy": true, 00:11:58.683 "nvme_iov_md": false 00:11:58.683 }, 00:11:58.683 "memory_domains": [ 00:11:58.683 { 00:11:58.683 "dma_device_id": "system", 00:11:58.683 "dma_device_type": 1 00:11:58.683 }, 00:11:58.683 { 00:11:58.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.683 "dma_device_type": 2 00:11:58.683 } 00:11:58.683 ], 00:11:58.683 "driver_specific": {} 00:11:58.683 } 00:11:58.683 ] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.683 BaseBdev4 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.683 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.683 [ 00:11:58.683 { 00:11:58.683 "name": "BaseBdev4", 00:11:58.683 "aliases": [ 00:11:58.684 "52a723a4-c9ce-4b9d-a538-937ad81e91eb" 00:11:58.684 ], 00:11:58.684 "product_name": "Malloc disk", 00:11:58.684 "block_size": 512, 00:11:58.684 "num_blocks": 65536, 00:11:58.684 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:11:58.684 "assigned_rate_limits": { 00:11:58.684 "rw_ios_per_sec": 0, 00:11:58.684 "rw_mbytes_per_sec": 0, 00:11:58.684 "r_mbytes_per_sec": 0, 00:11:58.684 "w_mbytes_per_sec": 0 00:11:58.684 }, 00:11:58.684 "claimed": false, 00:11:58.684 "zoned": false, 00:11:58.684 "supported_io_types": { 00:11:58.684 "read": true, 00:11:58.684 "write": true, 00:11:58.684 "unmap": true, 00:11:58.684 "flush": true, 00:11:58.684 "reset": true, 00:11:58.684 "nvme_admin": false, 00:11:58.684 "nvme_io": false, 00:11:58.684 "nvme_io_md": false, 00:11:58.684 "write_zeroes": true, 00:11:58.684 "zcopy": true, 00:11:58.684 "get_zone_info": false, 00:11:58.684 "zone_management": false, 00:11:58.684 "zone_append": false, 00:11:58.684 "compare": false, 00:11:58.684 "compare_and_write": false, 00:11:58.684 "abort": true, 00:11:58.684 "seek_hole": false, 00:11:58.684 "seek_data": false, 00:11:58.684 "copy": true, 00:11:58.684 "nvme_iov_md": false 00:11:58.684 }, 00:11:58.684 "memory_domains": [ 00:11:58.684 { 00:11:58.684 "dma_device_id": "system", 00:11:58.684 "dma_device_type": 1 00:11:58.684 }, 00:11:58.684 { 00:11:58.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.684 "dma_device_type": 2 00:11:58.684 } 00:11:58.684 ], 00:11:58.684 "driver_specific": {} 00:11:58.684 } 00:11:58.684 ] 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
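The `waitforbdev` helper seen repeatedly above polls `bdev_get_bdevs` until the named bdev appears, with a default timeout of 2000 ms. A minimal sketch of that polling loop, assuming a `get_bdevs()` callable that returns the parsed JSON list (the RPC transport itself is omitted; the helper name and signature here are illustrative, not the shell function's exact interface):

```python
import time

def waitforbdev(get_bdevs, name, timeout=2.0, interval=0.1):
    """Poll get_bdevs() until a bdev with the given name exists.

    get_bdevs is assumed to return a list of dicts shaped like the
    bdev_get_bdevs JSON dumps in this log (each with a "name" key).
    Raises TimeoutError if the bdev never appears.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if any(b.get("name") == name for b in get_bdevs()):
            return
        time.sleep(interval)
    raise TimeoutError(f"bdev {name} did not appear within {timeout}s")

# Usage with a stubbed RPC response mirroring the log output above:
bdevs = [{"name": "BaseBdev4", "block_size": 512, "num_blocks": 65536}]
waitforbdev(lambda: bdevs, "BaseBdev4")  # bdev already present, returns at once
```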
00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.684 [2024-12-14 12:37:58.337015] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.684 [2024-12-14 12:37:58.337115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.684 [2024-12-14 12:37:58.337181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:58.684 [2024-12-14 12:37:58.339322] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.684 [2024-12-14 12:37:58.339428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.684 "name": "Existed_Raid", 00:11:58.684 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:11:58.684 "strip_size_kb": 0, 00:11:58.684 "state": "configuring", 00:11:58.684 "raid_level": "raid1", 00:11:58.684 "superblock": true, 00:11:58.684 "num_base_bdevs": 4, 00:11:58.684 "num_base_bdevs_discovered": 3, 00:11:58.684 "num_base_bdevs_operational": 4, 00:11:58.684 "base_bdevs_list": [ 00:11:58.684 { 00:11:58.684 "name": "BaseBdev1", 00:11:58.684 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.684 "is_configured": false, 00:11:58.684 "data_offset": 0, 00:11:58.684 "data_size": 0 00:11:58.684 }, 00:11:58.684 { 00:11:58.684 "name": "BaseBdev2", 00:11:58.684 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 
00:11:58.684 "is_configured": true, 00:11:58.684 "data_offset": 2048, 00:11:58.684 "data_size": 63488 00:11:58.684 }, 00:11:58.684 { 00:11:58.684 "name": "BaseBdev3", 00:11:58.684 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:11:58.684 "is_configured": true, 00:11:58.684 "data_offset": 2048, 00:11:58.684 "data_size": 63488 00:11:58.684 }, 00:11:58.684 { 00:11:58.684 "name": "BaseBdev4", 00:11:58.684 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:11:58.684 "is_configured": true, 00:11:58.684 "data_offset": 2048, 00:11:58.684 "data_size": 63488 00:11:58.684 } 00:11:58.684 ] 00:11:58.684 }' 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.684 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.252 [2024-12-14 12:37:58.760307] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.252 "name": "Existed_Raid", 00:11:59.252 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:11:59.252 "strip_size_kb": 0, 00:11:59.252 "state": "configuring", 00:11:59.252 "raid_level": "raid1", 00:11:59.252 "superblock": true, 00:11:59.252 "num_base_bdevs": 4, 00:11:59.252 "num_base_bdevs_discovered": 2, 00:11:59.252 "num_base_bdevs_operational": 4, 00:11:59.252 "base_bdevs_list": [ 00:11:59.252 { 00:11:59.252 "name": "BaseBdev1", 00:11:59.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.252 "is_configured": false, 00:11:59.252 "data_offset": 0, 00:11:59.252 "data_size": 0 00:11:59.252 }, 00:11:59.252 { 00:11:59.252 "name": null, 00:11:59.252 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:11:59.252 
"is_configured": false, 00:11:59.252 "data_offset": 0, 00:11:59.252 "data_size": 63488 00:11:59.252 }, 00:11:59.252 { 00:11:59.252 "name": "BaseBdev3", 00:11:59.252 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:11:59.252 "is_configured": true, 00:11:59.252 "data_offset": 2048, 00:11:59.252 "data_size": 63488 00:11:59.252 }, 00:11:59.252 { 00:11:59.252 "name": "BaseBdev4", 00:11:59.252 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:11:59.252 "is_configured": true, 00:11:59.252 "data_offset": 2048, 00:11:59.252 "data_size": 63488 00:11:59.252 } 00:11:59.252 ] 00:11:59.252 }' 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.252 12:37:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.511 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.770 [2024-12-14 12:37:59.272778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.770 BaseBdev1 
00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.770 [ 00:11:59.770 { 00:11:59.770 "name": "BaseBdev1", 00:11:59.770 "aliases": [ 00:11:59.770 "5220b072-5355-4cfd-adea-106bd5c97ea5" 00:11:59.770 ], 00:11:59.770 "product_name": "Malloc disk", 00:11:59.770 "block_size": 512, 00:11:59.770 "num_blocks": 65536, 00:11:59.770 "uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:11:59.770 "assigned_rate_limits": { 00:11:59.770 
"rw_ios_per_sec": 0, 00:11:59.770 "rw_mbytes_per_sec": 0, 00:11:59.770 "r_mbytes_per_sec": 0, 00:11:59.770 "w_mbytes_per_sec": 0 00:11:59.770 }, 00:11:59.770 "claimed": true, 00:11:59.770 "claim_type": "exclusive_write", 00:11:59.770 "zoned": false, 00:11:59.770 "supported_io_types": { 00:11:59.770 "read": true, 00:11:59.770 "write": true, 00:11:59.770 "unmap": true, 00:11:59.770 "flush": true, 00:11:59.770 "reset": true, 00:11:59.770 "nvme_admin": false, 00:11:59.770 "nvme_io": false, 00:11:59.770 "nvme_io_md": false, 00:11:59.770 "write_zeroes": true, 00:11:59.770 "zcopy": true, 00:11:59.770 "get_zone_info": false, 00:11:59.770 "zone_management": false, 00:11:59.770 "zone_append": false, 00:11:59.770 "compare": false, 00:11:59.770 "compare_and_write": false, 00:11:59.770 "abort": true, 00:11:59.770 "seek_hole": false, 00:11:59.770 "seek_data": false, 00:11:59.770 "copy": true, 00:11:59.770 "nvme_iov_md": false 00:11:59.770 }, 00:11:59.770 "memory_domains": [ 00:11:59.770 { 00:11:59.770 "dma_device_id": "system", 00:11:59.770 "dma_device_type": 1 00:11:59.770 }, 00:11:59.770 { 00:11:59.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.770 "dma_device_type": 2 00:11:59.770 } 00:11:59.770 ], 00:11:59.770 "driver_specific": {} 00:11:59.770 } 00:11:59.770 ] 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:59.770 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.771 "name": "Existed_Raid", 00:11:59.771 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:11:59.771 "strip_size_kb": 0, 00:11:59.771 "state": "configuring", 00:11:59.771 "raid_level": "raid1", 00:11:59.771 "superblock": true, 00:11:59.771 "num_base_bdevs": 4, 00:11:59.771 "num_base_bdevs_discovered": 3, 00:11:59.771 "num_base_bdevs_operational": 4, 00:11:59.771 "base_bdevs_list": [ 00:11:59.771 { 00:11:59.771 "name": "BaseBdev1", 00:11:59.771 "uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:11:59.771 "is_configured": true, 00:11:59.771 "data_offset": 2048, 00:11:59.771 "data_size": 63488 
00:11:59.771 }, 00:11:59.771 { 00:11:59.771 "name": null, 00:11:59.771 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:11:59.771 "is_configured": false, 00:11:59.771 "data_offset": 0, 00:11:59.771 "data_size": 63488 00:11:59.771 }, 00:11:59.771 { 00:11:59.771 "name": "BaseBdev3", 00:11:59.771 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:11:59.771 "is_configured": true, 00:11:59.771 "data_offset": 2048, 00:11:59.771 "data_size": 63488 00:11:59.771 }, 00:11:59.771 { 00:11:59.771 "name": "BaseBdev4", 00:11:59.771 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:11:59.771 "is_configured": true, 00:11:59.771 "data_offset": 2048, 00:11:59.771 "data_size": 63488 00:11:59.771 } 00:11:59.771 ] 00:11:59.771 }' 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.771 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.029 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.029 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.029 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.030 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.030 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.030 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:00.030 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:00.030 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.030 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.288 
[2024-12-14 12:37:59.768003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.288 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.288 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.288 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.288 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.289 12:37:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.289 "name": "Existed_Raid", 00:12:00.289 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:12:00.289 "strip_size_kb": 0, 00:12:00.289 "state": "configuring", 00:12:00.289 "raid_level": "raid1", 00:12:00.289 "superblock": true, 00:12:00.289 "num_base_bdevs": 4, 00:12:00.289 "num_base_bdevs_discovered": 2, 00:12:00.289 "num_base_bdevs_operational": 4, 00:12:00.289 "base_bdevs_list": [ 00:12:00.289 { 00:12:00.289 "name": "BaseBdev1", 00:12:00.289 "uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:12:00.289 "is_configured": true, 00:12:00.289 "data_offset": 2048, 00:12:00.289 "data_size": 63488 00:12:00.289 }, 00:12:00.289 { 00:12:00.289 "name": null, 00:12:00.289 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:12:00.289 "is_configured": false, 00:12:00.289 "data_offset": 0, 00:12:00.289 "data_size": 63488 00:12:00.289 }, 00:12:00.289 { 00:12:00.289 "name": null, 00:12:00.289 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:12:00.289 "is_configured": false, 00:12:00.289 "data_offset": 0, 00:12:00.289 "data_size": 63488 00:12:00.289 }, 00:12:00.289 { 00:12:00.289 "name": "BaseBdev4", 00:12:00.289 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:12:00.289 "is_configured": true, 00:12:00.289 "data_offset": 2048, 00:12:00.289 "data_size": 63488 00:12:00.289 } 00:12:00.289 ] 00:12:00.289 }' 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.289 12:37:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 
12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 [2024-12-14 12:38:00.239226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.548 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.806 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.806 "name": "Existed_Raid", 00:12:00.806 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:12:00.806 "strip_size_kb": 0, 00:12:00.806 "state": "configuring", 00:12:00.806 "raid_level": "raid1", 00:12:00.806 "superblock": true, 00:12:00.806 "num_base_bdevs": 4, 00:12:00.806 "num_base_bdevs_discovered": 3, 00:12:00.806 "num_base_bdevs_operational": 4, 00:12:00.806 "base_bdevs_list": [ 00:12:00.806 { 00:12:00.806 "name": "BaseBdev1", 00:12:00.806 "uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:12:00.806 "is_configured": true, 00:12:00.806 "data_offset": 2048, 00:12:00.806 "data_size": 63488 00:12:00.806 }, 00:12:00.806 { 00:12:00.806 "name": null, 00:12:00.806 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:12:00.806 "is_configured": false, 00:12:00.806 "data_offset": 0, 00:12:00.806 "data_size": 63488 00:12:00.806 }, 00:12:00.806 { 00:12:00.806 "name": "BaseBdev3", 00:12:00.806 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:12:00.806 "is_configured": true, 00:12:00.806 "data_offset": 2048, 00:12:00.806 "data_size": 63488 00:12:00.806 }, 00:12:00.806 { 00:12:00.806 "name": "BaseBdev4", 00:12:00.806 "uuid": 
"52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:12:00.806 "is_configured": true, 00:12:00.806 "data_offset": 2048, 00:12:00.806 "data_size": 63488 00:12:00.806 } 00:12:00.806 ] 00:12:00.806 }' 00:12:00.806 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.806 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.067 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.067 [2024-12-14 12:38:00.758396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.326 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.326 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.326 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.327 "name": "Existed_Raid", 00:12:01.327 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:12:01.327 "strip_size_kb": 0, 00:12:01.327 "state": "configuring", 00:12:01.327 "raid_level": "raid1", 00:12:01.327 "superblock": true, 00:12:01.327 "num_base_bdevs": 4, 00:12:01.327 "num_base_bdevs_discovered": 2, 00:12:01.327 "num_base_bdevs_operational": 4, 00:12:01.327 "base_bdevs_list": [ 00:12:01.327 { 00:12:01.327 "name": null, 00:12:01.327 
"uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:12:01.327 "is_configured": false, 00:12:01.327 "data_offset": 0, 00:12:01.327 "data_size": 63488 00:12:01.327 }, 00:12:01.327 { 00:12:01.327 "name": null, 00:12:01.327 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:12:01.327 "is_configured": false, 00:12:01.327 "data_offset": 0, 00:12:01.327 "data_size": 63488 00:12:01.327 }, 00:12:01.327 { 00:12:01.327 "name": "BaseBdev3", 00:12:01.327 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:12:01.327 "is_configured": true, 00:12:01.327 "data_offset": 2048, 00:12:01.327 "data_size": 63488 00:12:01.327 }, 00:12:01.327 { 00:12:01.327 "name": "BaseBdev4", 00:12:01.327 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:12:01.327 "is_configured": true, 00:12:01.327 "data_offset": 2048, 00:12:01.327 "data_size": 63488 00:12:01.327 } 00:12:01.327 ] 00:12:01.327 }' 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.327 12:38:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.586 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.586 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.586 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:01.586 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.586 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.846 [2024-12-14 12:38:01.349105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.846 12:38:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.846 "name": "Existed_Raid", 00:12:01.846 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:12:01.846 "strip_size_kb": 0, 00:12:01.846 "state": "configuring", 00:12:01.846 "raid_level": "raid1", 00:12:01.846 "superblock": true, 00:12:01.846 "num_base_bdevs": 4, 00:12:01.846 "num_base_bdevs_discovered": 3, 00:12:01.846 "num_base_bdevs_operational": 4, 00:12:01.846 "base_bdevs_list": [ 00:12:01.846 { 00:12:01.846 "name": null, 00:12:01.846 "uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:12:01.846 "is_configured": false, 00:12:01.846 "data_offset": 0, 00:12:01.846 "data_size": 63488 00:12:01.846 }, 00:12:01.846 { 00:12:01.846 "name": "BaseBdev2", 00:12:01.846 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:12:01.846 "is_configured": true, 00:12:01.846 "data_offset": 2048, 00:12:01.846 "data_size": 63488 00:12:01.846 }, 00:12:01.846 { 00:12:01.846 "name": "BaseBdev3", 00:12:01.846 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:12:01.846 "is_configured": true, 00:12:01.846 "data_offset": 2048, 00:12:01.846 "data_size": 63488 00:12:01.846 }, 00:12:01.846 { 00:12:01.846 "name": "BaseBdev4", 00:12:01.846 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:12:01.846 "is_configured": true, 00:12:01.846 "data_offset": 2048, 00:12:01.846 "data_size": 63488 00:12:01.846 } 00:12:01.846 ] 00:12:01.846 }' 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.846 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.104 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:02.104 12:38:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.104 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.104 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5220b072-5355-4cfd-adea-106bd5c97ea5 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.416 [2024-12-14 12:38:01.941783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:02.416 [2024-12-14 12:38:01.942215] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:02.416 [2024-12-14 12:38:01.942290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:02.416 [2024-12-14 12:38:01.942563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:02.416 [2024-12-14 12:38:01.942767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:02.416 [2024-12-14 12:38:01.942809] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:02.416 NewBaseBdev 00:12:02.416 [2024-12-14 12:38:01.942995] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:02.416 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.416 12:38:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.416 [ 00:12:02.416 { 00:12:02.416 "name": "NewBaseBdev", 00:12:02.416 "aliases": [ 00:12:02.416 "5220b072-5355-4cfd-adea-106bd5c97ea5" 00:12:02.416 ], 00:12:02.416 "product_name": "Malloc disk", 00:12:02.416 "block_size": 512, 00:12:02.417 "num_blocks": 65536, 00:12:02.417 "uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:12:02.417 "assigned_rate_limits": { 00:12:02.417 "rw_ios_per_sec": 0, 00:12:02.417 "rw_mbytes_per_sec": 0, 00:12:02.417 "r_mbytes_per_sec": 0, 00:12:02.417 "w_mbytes_per_sec": 0 00:12:02.417 }, 00:12:02.417 "claimed": true, 00:12:02.417 "claim_type": "exclusive_write", 00:12:02.417 "zoned": false, 00:12:02.417 "supported_io_types": { 00:12:02.417 "read": true, 00:12:02.417 "write": true, 00:12:02.417 "unmap": true, 00:12:02.417 "flush": true, 00:12:02.417 "reset": true, 00:12:02.417 "nvme_admin": false, 00:12:02.417 "nvme_io": false, 00:12:02.417 "nvme_io_md": false, 00:12:02.417 "write_zeroes": true, 00:12:02.417 "zcopy": true, 00:12:02.417 "get_zone_info": false, 00:12:02.417 "zone_management": false, 00:12:02.417 "zone_append": false, 00:12:02.417 "compare": false, 00:12:02.417 "compare_and_write": false, 00:12:02.417 "abort": true, 00:12:02.417 "seek_hole": false, 00:12:02.417 "seek_data": false, 00:12:02.417 "copy": true, 00:12:02.417 "nvme_iov_md": false 00:12:02.417 }, 00:12:02.417 "memory_domains": [ 00:12:02.417 { 00:12:02.417 "dma_device_id": "system", 00:12:02.417 "dma_device_type": 1 00:12:02.417 }, 00:12:02.417 { 00:12:02.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.417 "dma_device_type": 2 00:12:02.417 } 00:12:02.417 ], 00:12:02.417 "driver_specific": {} 00:12:02.417 } 00:12:02.417 ] 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:02.417 12:38:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.417 12:38:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.417 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.417 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.417 "name": "Existed_Raid", 00:12:02.417 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:12:02.417 "strip_size_kb": 0, 00:12:02.417 
"state": "online", 00:12:02.417 "raid_level": "raid1", 00:12:02.417 "superblock": true, 00:12:02.417 "num_base_bdevs": 4, 00:12:02.417 "num_base_bdevs_discovered": 4, 00:12:02.417 "num_base_bdevs_operational": 4, 00:12:02.417 "base_bdevs_list": [ 00:12:02.417 { 00:12:02.417 "name": "NewBaseBdev", 00:12:02.417 "uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:12:02.417 "is_configured": true, 00:12:02.417 "data_offset": 2048, 00:12:02.417 "data_size": 63488 00:12:02.417 }, 00:12:02.417 { 00:12:02.417 "name": "BaseBdev2", 00:12:02.417 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:12:02.417 "is_configured": true, 00:12:02.417 "data_offset": 2048, 00:12:02.417 "data_size": 63488 00:12:02.417 }, 00:12:02.417 { 00:12:02.417 "name": "BaseBdev3", 00:12:02.417 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:12:02.417 "is_configured": true, 00:12:02.417 "data_offset": 2048, 00:12:02.417 "data_size": 63488 00:12:02.417 }, 00:12:02.417 { 00:12:02.417 "name": "BaseBdev4", 00:12:02.417 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:12:02.417 "is_configured": true, 00:12:02.417 "data_offset": 2048, 00:12:02.417 "data_size": 63488 00:12:02.417 } 00:12:02.417 ] 00:12:02.417 }' 00:12:02.417 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.417 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:02.983 
12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:02.983 [2024-12-14 12:38:02.421383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.983 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:02.983 "name": "Existed_Raid", 00:12:02.983 "aliases": [ 00:12:02.983 "2899c7f2-5903-4c45-9539-e8108aa7358f" 00:12:02.983 ], 00:12:02.983 "product_name": "Raid Volume", 00:12:02.983 "block_size": 512, 00:12:02.983 "num_blocks": 63488, 00:12:02.983 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:12:02.983 "assigned_rate_limits": { 00:12:02.983 "rw_ios_per_sec": 0, 00:12:02.983 "rw_mbytes_per_sec": 0, 00:12:02.983 "r_mbytes_per_sec": 0, 00:12:02.983 "w_mbytes_per_sec": 0 00:12:02.983 }, 00:12:02.983 "claimed": false, 00:12:02.983 "zoned": false, 00:12:02.983 "supported_io_types": { 00:12:02.983 "read": true, 00:12:02.983 "write": true, 00:12:02.983 "unmap": false, 00:12:02.983 "flush": false, 00:12:02.983 "reset": true, 00:12:02.983 "nvme_admin": false, 00:12:02.983 "nvme_io": false, 00:12:02.983 "nvme_io_md": false, 00:12:02.983 "write_zeroes": true, 00:12:02.983 "zcopy": false, 00:12:02.983 "get_zone_info": false, 00:12:02.983 "zone_management": false, 00:12:02.983 "zone_append": false, 00:12:02.983 "compare": false, 00:12:02.983 "compare_and_write": false, 00:12:02.983 
"abort": false, 00:12:02.983 "seek_hole": false, 00:12:02.983 "seek_data": false, 00:12:02.983 "copy": false, 00:12:02.983 "nvme_iov_md": false 00:12:02.983 }, 00:12:02.983 "memory_domains": [ 00:12:02.983 { 00:12:02.983 "dma_device_id": "system", 00:12:02.983 "dma_device_type": 1 00:12:02.983 }, 00:12:02.983 { 00:12:02.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.983 "dma_device_type": 2 00:12:02.983 }, 00:12:02.983 { 00:12:02.983 "dma_device_id": "system", 00:12:02.983 "dma_device_type": 1 00:12:02.983 }, 00:12:02.983 { 00:12:02.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.984 "dma_device_type": 2 00:12:02.984 }, 00:12:02.984 { 00:12:02.984 "dma_device_id": "system", 00:12:02.984 "dma_device_type": 1 00:12:02.984 }, 00:12:02.984 { 00:12:02.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.984 "dma_device_type": 2 00:12:02.984 }, 00:12:02.984 { 00:12:02.984 "dma_device_id": "system", 00:12:02.984 "dma_device_type": 1 00:12:02.984 }, 00:12:02.984 { 00:12:02.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.984 "dma_device_type": 2 00:12:02.984 } 00:12:02.984 ], 00:12:02.984 "driver_specific": { 00:12:02.984 "raid": { 00:12:02.984 "uuid": "2899c7f2-5903-4c45-9539-e8108aa7358f", 00:12:02.984 "strip_size_kb": 0, 00:12:02.984 "state": "online", 00:12:02.984 "raid_level": "raid1", 00:12:02.984 "superblock": true, 00:12:02.984 "num_base_bdevs": 4, 00:12:02.984 "num_base_bdevs_discovered": 4, 00:12:02.984 "num_base_bdevs_operational": 4, 00:12:02.984 "base_bdevs_list": [ 00:12:02.984 { 00:12:02.984 "name": "NewBaseBdev", 00:12:02.984 "uuid": "5220b072-5355-4cfd-adea-106bd5c97ea5", 00:12:02.984 "is_configured": true, 00:12:02.984 "data_offset": 2048, 00:12:02.984 "data_size": 63488 00:12:02.984 }, 00:12:02.984 { 00:12:02.984 "name": "BaseBdev2", 00:12:02.984 "uuid": "53c05357-71b7-436e-ad9c-f0248a69ba4f", 00:12:02.984 "is_configured": true, 00:12:02.984 "data_offset": 2048, 00:12:02.984 "data_size": 63488 00:12:02.984 }, 00:12:02.984 { 
00:12:02.984 "name": "BaseBdev3", 00:12:02.984 "uuid": "35d65095-19b5-4a1b-8bac-9473e14ac2ea", 00:12:02.984 "is_configured": true, 00:12:02.984 "data_offset": 2048, 00:12:02.984 "data_size": 63488 00:12:02.984 }, 00:12:02.984 { 00:12:02.984 "name": "BaseBdev4", 00:12:02.984 "uuid": "52a723a4-c9ce-4b9d-a538-937ad81e91eb", 00:12:02.984 "is_configured": true, 00:12:02.984 "data_offset": 2048, 00:12:02.984 "data_size": 63488 00:12:02.984 } 00:12:02.984 ] 00:12:02.984 } 00:12:02.984 } 00:12:02.984 }' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:02.984 BaseBdev2 00:12:02.984 BaseBdev3 00:12:02.984 BaseBdev4' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.984 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.243 [2024-12-14 12:38:02.740479] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.243 [2024-12-14 12:38:02.740513] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.243 [2024-12-14 12:38:02.740605] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.243 [2024-12-14 12:38:02.740941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.243 [2024-12-14 12:38:02.740958] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 75629 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75629 ']' 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 75629 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75629 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75629' 00:12:03.243 killing process with pid 75629 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 75629 00:12:03.243 [2024-12-14 12:38:02.778728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.243 12:38:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 75629 00:12:03.501 [2024-12-14 12:38:03.186068] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:04.877 12:38:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:04.877 ************************************ 00:12:04.877 END TEST raid_state_function_test_sb 00:12:04.877 ************************************ 00:12:04.877 00:12:04.877 real 0m11.492s 
00:12:04.877 user 0m18.200s 00:12:04.877 sys 0m2.005s 00:12:04.877 12:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.877 12:38:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.877 12:38:04 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:04.877 12:38:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:04.877 12:38:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.877 12:38:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:04.877 ************************************ 00:12:04.877 START TEST raid_superblock_test 00:12:04.877 ************************************ 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:04.877 12:38:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=76305 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 76305 00:12:04.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 76305 ']' 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.877 12:38:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.877 [2024-12-14 12:38:04.510928] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:04.877 [2024-12-14 12:38:04.511165] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76305 ] 00:12:05.135 [2024-12-14 12:38:04.681757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.135 [2024-12-14 12:38:04.794418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.395 [2024-12-14 12:38:04.998301] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.395 [2024-12-14 12:38:04.998443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:05.653 
12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.653 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 malloc1 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 [2024-12-14 12:38:05.396727] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:05.913 [2024-12-14 12:38:05.396786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.913 [2024-12-14 12:38:05.396825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:05.913 [2024-12-14 12:38:05.396834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.913 [2024-12-14 12:38:05.399048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.913 [2024-12-14 12:38:05.399095] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:05.913 pt1 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 malloc2 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 [2024-12-14 12:38:05.450553] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:05.913 [2024-12-14 12:38:05.450672] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.913 [2024-12-14 12:38:05.450712] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:05.913 [2024-12-14 12:38:05.450739] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.913 [2024-12-14 12:38:05.452881] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.913 [2024-12-14 12:38:05.452953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:05.913 
pt2 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 malloc3 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 [2024-12-14 12:38:05.519258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:05.913 [2024-12-14 12:38:05.519357] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.913 [2024-12-14 12:38:05.519396] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:05.913 [2024-12-14 12:38:05.519428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.913 [2024-12-14 12:38:05.521535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.913 [2024-12-14 12:38:05.521607] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:05.913 pt3 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 malloc4 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.913 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.913 [2024-12-14 12:38:05.574638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:05.913 [2024-12-14 12:38:05.574741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:05.913 [2024-12-14 12:38:05.574782] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:05.913 [2024-12-14 12:38:05.574811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:05.914 [2024-12-14 12:38:05.576917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:05.914 [2024-12-14 12:38:05.576994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:05.914 pt4 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.914 [2024-12-14 12:38:05.586638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:05.914 [2024-12-14 12:38:05.588395] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:05.914 [2024-12-14 12:38:05.588513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:05.914 [2024-12-14 12:38:05.588597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:05.914 [2024-12-14 12:38:05.588823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:05.914 [2024-12-14 12:38:05.588873] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:05.914 [2024-12-14 12:38:05.589139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:05.914 [2024-12-14 12:38:05.589359] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:05.914 [2024-12-14 12:38:05.589407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:05.914 [2024-12-14 12:38:05.589580] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.914 
12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.914 "name": "raid_bdev1", 00:12:05.914 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:05.914 "strip_size_kb": 0, 00:12:05.914 "state": "online", 00:12:05.914 "raid_level": "raid1", 00:12:05.914 "superblock": true, 00:12:05.914 "num_base_bdevs": 4, 00:12:05.914 "num_base_bdevs_discovered": 4, 00:12:05.914 "num_base_bdevs_operational": 4, 00:12:05.914 "base_bdevs_list": [ 00:12:05.914 { 00:12:05.914 "name": "pt1", 00:12:05.914 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:05.914 "is_configured": true, 00:12:05.914 "data_offset": 2048, 00:12:05.914 "data_size": 63488 00:12:05.914 }, 00:12:05.914 { 00:12:05.914 "name": "pt2", 00:12:05.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:05.914 "is_configured": true, 00:12:05.914 "data_offset": 2048, 00:12:05.914 "data_size": 63488 00:12:05.914 }, 00:12:05.914 { 00:12:05.914 "name": "pt3", 00:12:05.914 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:05.914 "is_configured": true, 00:12:05.914 "data_offset": 2048, 00:12:05.914 "data_size": 63488 
00:12:05.914 }, 00:12:05.914 { 00:12:05.914 "name": "pt4", 00:12:05.914 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:05.914 "is_configured": true, 00:12:05.914 "data_offset": 2048, 00:12:05.914 "data_size": 63488 00:12:05.914 } 00:12:05.914 ] 00:12:05.914 }' 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.914 12:38:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:06.481 [2024-12-14 12:38:06.042270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.481 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.481 "name": "raid_bdev1", 00:12:06.481 "aliases": [ 00:12:06.481 "3c67399d-3d25-41aa-aedb-deefd33a1bff" 00:12:06.481 ], 
00:12:06.481 "product_name": "Raid Volume", 00:12:06.481 "block_size": 512, 00:12:06.481 "num_blocks": 63488, 00:12:06.481 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:06.481 "assigned_rate_limits": { 00:12:06.481 "rw_ios_per_sec": 0, 00:12:06.481 "rw_mbytes_per_sec": 0, 00:12:06.481 "r_mbytes_per_sec": 0, 00:12:06.481 "w_mbytes_per_sec": 0 00:12:06.481 }, 00:12:06.481 "claimed": false, 00:12:06.481 "zoned": false, 00:12:06.481 "supported_io_types": { 00:12:06.481 "read": true, 00:12:06.481 "write": true, 00:12:06.481 "unmap": false, 00:12:06.481 "flush": false, 00:12:06.481 "reset": true, 00:12:06.481 "nvme_admin": false, 00:12:06.481 "nvme_io": false, 00:12:06.481 "nvme_io_md": false, 00:12:06.481 "write_zeroes": true, 00:12:06.481 "zcopy": false, 00:12:06.481 "get_zone_info": false, 00:12:06.481 "zone_management": false, 00:12:06.481 "zone_append": false, 00:12:06.481 "compare": false, 00:12:06.481 "compare_and_write": false, 00:12:06.481 "abort": false, 00:12:06.481 "seek_hole": false, 00:12:06.481 "seek_data": false, 00:12:06.481 "copy": false, 00:12:06.481 "nvme_iov_md": false 00:12:06.481 }, 00:12:06.481 "memory_domains": [ 00:12:06.481 { 00:12:06.481 "dma_device_id": "system", 00:12:06.481 "dma_device_type": 1 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.481 "dma_device_type": 2 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "dma_device_id": "system", 00:12:06.481 "dma_device_type": 1 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.481 "dma_device_type": 2 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "dma_device_id": "system", 00:12:06.481 "dma_device_type": 1 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.481 "dma_device_type": 2 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "dma_device_id": "system", 00:12:06.481 "dma_device_type": 1 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:06.481 "dma_device_type": 2 00:12:06.481 } 00:12:06.481 ], 00:12:06.481 "driver_specific": { 00:12:06.481 "raid": { 00:12:06.481 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:06.481 "strip_size_kb": 0, 00:12:06.481 "state": "online", 00:12:06.481 "raid_level": "raid1", 00:12:06.481 "superblock": true, 00:12:06.481 "num_base_bdevs": 4, 00:12:06.481 "num_base_bdevs_discovered": 4, 00:12:06.481 "num_base_bdevs_operational": 4, 00:12:06.481 "base_bdevs_list": [ 00:12:06.481 { 00:12:06.481 "name": "pt1", 00:12:06.481 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.481 "is_configured": true, 00:12:06.481 "data_offset": 2048, 00:12:06.481 "data_size": 63488 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "name": "pt2", 00:12:06.481 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.481 "is_configured": true, 00:12:06.481 "data_offset": 2048, 00:12:06.481 "data_size": 63488 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "name": "pt3", 00:12:06.481 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.481 "is_configured": true, 00:12:06.481 "data_offset": 2048, 00:12:06.481 "data_size": 63488 00:12:06.481 }, 00:12:06.481 { 00:12:06.481 "name": "pt4", 00:12:06.481 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.481 "is_configured": true, 00:12:06.481 "data_offset": 2048, 00:12:06.481 "data_size": 63488 00:12:06.481 } 00:12:06.482 ] 00:12:06.482 } 00:12:06.482 } 00:12:06.482 }' 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:06.482 pt2 00:12:06.482 pt3 00:12:06.482 pt4' 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.482 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.741 12:38:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:06.741 [2024-12-14 12:38:06.385569] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3c67399d-3d25-41aa-aedb-deefd33a1bff 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3c67399d-3d25-41aa-aedb-deefd33a1bff ']' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 [2024-12-14 12:38:06.433173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:06.741 [2024-12-14 12:38:06.433199] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.741 [2024-12-14 12:38:06.433283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.741 [2024-12-14 12:38:06.433367] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.741 [2024-12-14 12:38:06.433381] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.741 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 [2024-12-14 12:38:06.596934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:07.002 [2024-12-14 12:38:06.598929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:07.002 [2024-12-14 12:38:06.599030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:07.002 [2024-12-14 12:38:06.599116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:07.002 [2024-12-14 12:38:06.599203] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:07.002 [2024-12-14 12:38:06.599298] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:07.002 [2024-12-14 12:38:06.599357] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:07.002 [2024-12-14 12:38:06.599421] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:07.002 [2024-12-14 12:38:06.599471] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.002 [2024-12-14 12:38:06.599506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 
00:12:07.002 request: 00:12:07.002 { 00:12:07.002 "name": "raid_bdev1", 00:12:07.002 "raid_level": "raid1", 00:12:07.002 "base_bdevs": [ 00:12:07.002 "malloc1", 00:12:07.002 "malloc2", 00:12:07.002 "malloc3", 00:12:07.002 "malloc4" 00:12:07.002 ], 00:12:07.002 "superblock": false, 00:12:07.002 "method": "bdev_raid_create", 00:12:07.002 "req_id": 1 00:12:07.002 } 00:12:07.002 Got JSON-RPC error response 00:12:07.002 response: 00:12:07.002 { 00:12:07.002 "code": -17, 00:12:07.002 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:07.002 } 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:07.002 12:38:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 [2024-12-14 12:38:06.664793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:07.002 [2024-12-14 12:38:06.664848] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.002 [2024-12-14 12:38:06.664865] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:07.002 [2024-12-14 12:38:06.664875] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.002 [2024-12-14 12:38:06.667149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.002 [2024-12-14 12:38:06.667232] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:07.002 [2024-12-14 12:38:06.667339] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:07.002 [2024-12-14 12:38:06.667404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:07.002 pt1 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.002 12:38:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.002 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.002 "name": "raid_bdev1", 00:12:07.002 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:07.002 "strip_size_kb": 0, 00:12:07.002 "state": "configuring", 00:12:07.002 "raid_level": "raid1", 00:12:07.002 "superblock": true, 00:12:07.002 "num_base_bdevs": 4, 00:12:07.002 "num_base_bdevs_discovered": 1, 00:12:07.002 "num_base_bdevs_operational": 4, 00:12:07.002 "base_bdevs_list": [ 00:12:07.002 { 00:12:07.002 "name": "pt1", 00:12:07.002 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.002 "is_configured": true, 00:12:07.002 "data_offset": 2048, 00:12:07.002 "data_size": 63488 00:12:07.002 }, 00:12:07.002 { 00:12:07.002 "name": null, 00:12:07.002 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.002 "is_configured": false, 00:12:07.002 "data_offset": 2048, 00:12:07.002 "data_size": 63488 00:12:07.002 }, 00:12:07.002 { 00:12:07.002 "name": null, 00:12:07.003 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.003 
"is_configured": false, 00:12:07.003 "data_offset": 2048, 00:12:07.003 "data_size": 63488 00:12:07.003 }, 00:12:07.003 { 00:12:07.003 "name": null, 00:12:07.003 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.003 "is_configured": false, 00:12:07.003 "data_offset": 2048, 00:12:07.003 "data_size": 63488 00:12:07.003 } 00:12:07.003 ] 00:12:07.003 }' 00:12:07.003 12:38:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.003 12:38:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.571 [2024-12-14 12:38:07.096125] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:07.571 [2024-12-14 12:38:07.096249] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.571 [2024-12-14 12:38:07.096323] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:07.571 [2024-12-14 12:38:07.096360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.571 [2024-12-14 12:38:07.096865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.571 [2024-12-14 12:38:07.096931] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:07.571 [2024-12-14 12:38:07.097065] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:07.571 [2024-12-14 12:38:07.097130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:07.571 pt2 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.571 [2024-12-14 12:38:07.108104] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.571 "name": "raid_bdev1", 00:12:07.571 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:07.571 "strip_size_kb": 0, 00:12:07.571 "state": "configuring", 00:12:07.571 "raid_level": "raid1", 00:12:07.571 "superblock": true, 00:12:07.571 "num_base_bdevs": 4, 00:12:07.571 "num_base_bdevs_discovered": 1, 00:12:07.571 "num_base_bdevs_operational": 4, 00:12:07.571 "base_bdevs_list": [ 00:12:07.571 { 00:12:07.571 "name": "pt1", 00:12:07.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.571 "is_configured": true, 00:12:07.571 "data_offset": 2048, 00:12:07.571 "data_size": 63488 00:12:07.571 }, 00:12:07.571 { 00:12:07.571 "name": null, 00:12:07.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.571 "is_configured": false, 00:12:07.571 "data_offset": 0, 00:12:07.571 "data_size": 63488 00:12:07.571 }, 00:12:07.571 { 00:12:07.571 "name": null, 00:12:07.571 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.571 "is_configured": false, 00:12:07.571 "data_offset": 2048, 00:12:07.571 "data_size": 63488 00:12:07.571 }, 00:12:07.571 { 00:12:07.571 "name": null, 00:12:07.571 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.571 "is_configured": false, 00:12:07.571 "data_offset": 2048, 00:12:07.571 "data_size": 63488 00:12:07.571 } 00:12:07.571 ] 00:12:07.571 }' 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.571 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 [2024-12-14 12:38:07.547323] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:07.830 [2024-12-14 12:38:07.547455] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.830 [2024-12-14 12:38:07.547480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:07.830 [2024-12-14 12:38:07.547490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.830 [2024-12-14 12:38:07.547957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.830 [2024-12-14 12:38:07.547977] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:07.830 [2024-12-14 12:38:07.548077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:07.830 [2024-12-14 12:38:07.548100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:07.830 pt2 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:07.830 12:38:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.830 [2024-12-14 12:38:07.559261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:07.830 [2024-12-14 12:38:07.559309] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.830 [2024-12-14 12:38:07.559344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:07.830 [2024-12-14 12:38:07.559351] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.830 [2024-12-14 12:38:07.559725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.830 [2024-12-14 12:38:07.559741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:07.830 [2024-12-14 12:38:07.559805] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:07.830 [2024-12-14 12:38:07.559821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:07.830 pt3 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.830 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 [2024-12-14 12:38:07.571232] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:08.089 [2024-12-14 
12:38:07.571278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.089 [2024-12-14 12:38:07.571295] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:08.089 [2024-12-14 12:38:07.571303] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.089 [2024-12-14 12:38:07.571674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.089 [2024-12-14 12:38:07.571690] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:08.089 [2024-12-14 12:38:07.571753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:08.089 [2024-12-14 12:38:07.571777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:08.089 [2024-12-14 12:38:07.571923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.089 [2024-12-14 12:38:07.571931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.089 [2024-12-14 12:38:07.572179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:08.089 [2024-12-14 12:38:07.572328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.089 [2024-12-14 12:38:07.572341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:08.089 [2024-12-14 12:38:07.572476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.089 pt4 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.089 "name": "raid_bdev1", 00:12:08.089 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:08.089 "strip_size_kb": 0, 00:12:08.089 "state": "online", 00:12:08.089 "raid_level": "raid1", 00:12:08.089 "superblock": true, 00:12:08.089 "num_base_bdevs": 4, 00:12:08.089 
"num_base_bdevs_discovered": 4, 00:12:08.089 "num_base_bdevs_operational": 4, 00:12:08.089 "base_bdevs_list": [ 00:12:08.089 { 00:12:08.089 "name": "pt1", 00:12:08.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.089 "is_configured": true, 00:12:08.089 "data_offset": 2048, 00:12:08.089 "data_size": 63488 00:12:08.089 }, 00:12:08.089 { 00:12:08.089 "name": "pt2", 00:12:08.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.089 "is_configured": true, 00:12:08.089 "data_offset": 2048, 00:12:08.089 "data_size": 63488 00:12:08.089 }, 00:12:08.089 { 00:12:08.089 "name": "pt3", 00:12:08.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.089 "is_configured": true, 00:12:08.089 "data_offset": 2048, 00:12:08.089 "data_size": 63488 00:12:08.089 }, 00:12:08.089 { 00:12:08.089 "name": "pt4", 00:12:08.089 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.089 "is_configured": true, 00:12:08.089 "data_offset": 2048, 00:12:08.089 "data_size": 63488 00:12:08.089 } 00:12:08.089 ] 00:12:08.089 }' 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.089 12:38:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.348 [2024-12-14 12:38:08.014855] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.348 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:08.348 "name": "raid_bdev1", 00:12:08.348 "aliases": [ 00:12:08.348 "3c67399d-3d25-41aa-aedb-deefd33a1bff" 00:12:08.348 ], 00:12:08.348 "product_name": "Raid Volume", 00:12:08.348 "block_size": 512, 00:12:08.348 "num_blocks": 63488, 00:12:08.348 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:08.348 "assigned_rate_limits": { 00:12:08.348 "rw_ios_per_sec": 0, 00:12:08.348 "rw_mbytes_per_sec": 0, 00:12:08.348 "r_mbytes_per_sec": 0, 00:12:08.348 "w_mbytes_per_sec": 0 00:12:08.348 }, 00:12:08.348 "claimed": false, 00:12:08.348 "zoned": false, 00:12:08.348 "supported_io_types": { 00:12:08.348 "read": true, 00:12:08.348 "write": true, 00:12:08.348 "unmap": false, 00:12:08.348 "flush": false, 00:12:08.349 "reset": true, 00:12:08.349 "nvme_admin": false, 00:12:08.349 "nvme_io": false, 00:12:08.349 "nvme_io_md": false, 00:12:08.349 "write_zeroes": true, 00:12:08.349 "zcopy": false, 00:12:08.349 "get_zone_info": false, 00:12:08.349 "zone_management": false, 00:12:08.349 "zone_append": false, 00:12:08.349 "compare": false, 00:12:08.349 "compare_and_write": false, 00:12:08.349 "abort": false, 00:12:08.349 "seek_hole": false, 00:12:08.349 "seek_data": false, 00:12:08.349 "copy": false, 00:12:08.349 "nvme_iov_md": false 00:12:08.349 }, 00:12:08.349 "memory_domains": [ 00:12:08.349 { 00:12:08.349 "dma_device_id": "system", 00:12:08.349 
"dma_device_type": 1 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.349 "dma_device_type": 2 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "dma_device_id": "system", 00:12:08.349 "dma_device_type": 1 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.349 "dma_device_type": 2 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "dma_device_id": "system", 00:12:08.349 "dma_device_type": 1 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.349 "dma_device_type": 2 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "dma_device_id": "system", 00:12:08.349 "dma_device_type": 1 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.349 "dma_device_type": 2 00:12:08.349 } 00:12:08.349 ], 00:12:08.349 "driver_specific": { 00:12:08.349 "raid": { 00:12:08.349 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:08.349 "strip_size_kb": 0, 00:12:08.349 "state": "online", 00:12:08.349 "raid_level": "raid1", 00:12:08.349 "superblock": true, 00:12:08.349 "num_base_bdevs": 4, 00:12:08.349 "num_base_bdevs_discovered": 4, 00:12:08.349 "num_base_bdevs_operational": 4, 00:12:08.349 "base_bdevs_list": [ 00:12:08.349 { 00:12:08.349 "name": "pt1", 00:12:08.349 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.349 "is_configured": true, 00:12:08.349 "data_offset": 2048, 00:12:08.349 "data_size": 63488 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "name": "pt2", 00:12:08.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.349 "is_configured": true, 00:12:08.349 "data_offset": 2048, 00:12:08.349 "data_size": 63488 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "name": "pt3", 00:12:08.349 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.349 "is_configured": true, 00:12:08.349 "data_offset": 2048, 00:12:08.349 "data_size": 63488 00:12:08.349 }, 00:12:08.349 { 00:12:08.349 "name": "pt4", 00:12:08.349 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:08.349 "is_configured": true, 00:12:08.349 "data_offset": 2048, 00:12:08.349 "data_size": 63488 00:12:08.349 } 00:12:08.349 ] 00:12:08.349 } 00:12:08.349 } 00:12:08.349 }' 00:12:08.349 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:08.608 pt2 00:12:08.608 pt3 00:12:08.608 pt4' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.608 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.870 [2024-12-14 12:38:08.370270] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3c67399d-3d25-41aa-aedb-deefd33a1bff '!=' 3c67399d-3d25-41aa-aedb-deefd33a1bff ']' 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.870 [2024-12-14 12:38:08.401919] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:08.870 12:38:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.870 "name": "raid_bdev1", 00:12:08.870 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:08.870 "strip_size_kb": 0, 00:12:08.870 "state": "online", 
00:12:08.870 "raid_level": "raid1", 00:12:08.870 "superblock": true, 00:12:08.870 "num_base_bdevs": 4, 00:12:08.870 "num_base_bdevs_discovered": 3, 00:12:08.870 "num_base_bdevs_operational": 3, 00:12:08.870 "base_bdevs_list": [ 00:12:08.870 { 00:12:08.870 "name": null, 00:12:08.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.870 "is_configured": false, 00:12:08.870 "data_offset": 0, 00:12:08.870 "data_size": 63488 00:12:08.870 }, 00:12:08.870 { 00:12:08.870 "name": "pt2", 00:12:08.870 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.870 "is_configured": true, 00:12:08.870 "data_offset": 2048, 00:12:08.870 "data_size": 63488 00:12:08.870 }, 00:12:08.870 { 00:12:08.870 "name": "pt3", 00:12:08.870 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.870 "is_configured": true, 00:12:08.870 "data_offset": 2048, 00:12:08.870 "data_size": 63488 00:12:08.870 }, 00:12:08.870 { 00:12:08.870 "name": "pt4", 00:12:08.870 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.870 "is_configured": true, 00:12:08.870 "data_offset": 2048, 00:12:08.870 "data_size": 63488 00:12:08.870 } 00:12:08.870 ] 00:12:08.870 }' 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.870 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.135 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:09.135 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.135 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.135 [2024-12-14 12:38:08.801169] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:09.136 [2024-12-14 12:38:08.801257] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.136 [2024-12-14 12:38:08.801382] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:09.136 [2024-12-14 12:38:08.801497] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.136 [2024-12-14 12:38:08.801547] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:09.136 
12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.136 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.395 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.395 [2024-12-14 12:38:08.896990] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:09.395 [2024-12-14 12:38:08.897073] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.395 [2024-12-14 12:38:08.897092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:09.395 [2024-12-14 12:38:08.897101] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.395 [2024-12-14 12:38:08.899397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.395 [2024-12-14 12:38:08.899475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:09.395 [2024-12-14 12:38:08.899565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:09.396 [2024-12-14 12:38:08.899621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:09.396 pt2 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.396 "name": "raid_bdev1", 00:12:09.396 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:09.396 "strip_size_kb": 0, 00:12:09.396 "state": "configuring", 00:12:09.396 "raid_level": "raid1", 00:12:09.396 "superblock": true, 00:12:09.396 "num_base_bdevs": 4, 00:12:09.396 "num_base_bdevs_discovered": 1, 00:12:09.396 "num_base_bdevs_operational": 3, 00:12:09.396 "base_bdevs_list": [ 00:12:09.396 { 00:12:09.396 "name": null, 00:12:09.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.396 "is_configured": false, 00:12:09.396 "data_offset": 2048, 00:12:09.396 "data_size": 63488 00:12:09.396 }, 00:12:09.396 { 00:12:09.396 "name": "pt2", 00:12:09.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.396 "is_configured": true, 00:12:09.396 "data_offset": 2048, 00:12:09.396 "data_size": 63488 00:12:09.396 }, 00:12:09.396 { 00:12:09.396 "name": null, 00:12:09.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.396 "is_configured": false, 00:12:09.396 "data_offset": 2048, 00:12:09.396 "data_size": 63488 00:12:09.396 }, 00:12:09.396 { 00:12:09.396 "name": null, 00:12:09.396 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:09.396 "is_configured": false, 00:12:09.396 "data_offset": 2048, 00:12:09.396 "data_size": 63488 00:12:09.396 } 00:12:09.396 ] 00:12:09.396 }' 
00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.396 12:38:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.655 [2024-12-14 12:38:09.344264] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:09.655 [2024-12-14 12:38:09.344388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.655 [2024-12-14 12:38:09.344438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:09.655 [2024-12-14 12:38:09.344469] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.655 [2024-12-14 12:38:09.344968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.655 [2024-12-14 12:38:09.345034] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:09.655 [2024-12-14 12:38:09.345166] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:09.655 [2024-12-14 12:38:09.345216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:09.655 pt3 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.655 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.656 "name": "raid_bdev1", 00:12:09.656 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:09.656 "strip_size_kb": 0, 00:12:09.656 "state": "configuring", 00:12:09.656 "raid_level": "raid1", 00:12:09.656 "superblock": true, 00:12:09.656 "num_base_bdevs": 4, 00:12:09.656 "num_base_bdevs_discovered": 2, 00:12:09.656 "num_base_bdevs_operational": 3, 00:12:09.656 
"base_bdevs_list": [ 00:12:09.656 { 00:12:09.656 "name": null, 00:12:09.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.656 "is_configured": false, 00:12:09.656 "data_offset": 2048, 00:12:09.656 "data_size": 63488 00:12:09.656 }, 00:12:09.656 { 00:12:09.656 "name": "pt2", 00:12:09.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.656 "is_configured": true, 00:12:09.656 "data_offset": 2048, 00:12:09.656 "data_size": 63488 00:12:09.656 }, 00:12:09.656 { 00:12:09.656 "name": "pt3", 00:12:09.656 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.656 "is_configured": true, 00:12:09.656 "data_offset": 2048, 00:12:09.656 "data_size": 63488 00:12:09.656 }, 00:12:09.656 { 00:12:09.656 "name": null, 00:12:09.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:09.656 "is_configured": false, 00:12:09.656 "data_offset": 2048, 00:12:09.656 "data_size": 63488 00:12:09.656 } 00:12:09.656 ] 00:12:09.656 }' 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.656 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.224 [2024-12-14 12:38:09.795532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:10.224 [2024-12-14 12:38:09.795617] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:10.224 [2024-12-14 12:38:09.795643] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:10.224 [2024-12-14 12:38:09.795653] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.224 [2024-12-14 12:38:09.796132] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.224 [2024-12-14 12:38:09.796162] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:10.224 [2024-12-14 12:38:09.796256] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:10.224 [2024-12-14 12:38:09.796282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:10.224 [2024-12-14 12:38:09.796419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:10.224 [2024-12-14 12:38:09.796432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:10.224 [2024-12-14 12:38:09.796678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:10.224 [2024-12-14 12:38:09.796843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:10.224 [2024-12-14 12:38:09.796856] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:10.224 [2024-12-14 12:38:09.796993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:10.224 pt4 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.224 "name": "raid_bdev1", 00:12:10.224 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:10.224 "strip_size_kb": 0, 00:12:10.224 "state": "online", 00:12:10.224 "raid_level": "raid1", 00:12:10.224 "superblock": true, 00:12:10.224 "num_base_bdevs": 4, 00:12:10.224 "num_base_bdevs_discovered": 3, 00:12:10.224 "num_base_bdevs_operational": 3, 00:12:10.224 "base_bdevs_list": [ 00:12:10.224 { 00:12:10.224 "name": null, 00:12:10.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.224 "is_configured": false, 00:12:10.224 
"data_offset": 2048, 00:12:10.224 "data_size": 63488 00:12:10.224 }, 00:12:10.224 { 00:12:10.224 "name": "pt2", 00:12:10.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.224 "is_configured": true, 00:12:10.224 "data_offset": 2048, 00:12:10.224 "data_size": 63488 00:12:10.224 }, 00:12:10.224 { 00:12:10.224 "name": "pt3", 00:12:10.224 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.224 "is_configured": true, 00:12:10.224 "data_offset": 2048, 00:12:10.224 "data_size": 63488 00:12:10.224 }, 00:12:10.224 { 00:12:10.224 "name": "pt4", 00:12:10.224 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:10.224 "is_configured": true, 00:12:10.224 "data_offset": 2048, 00:12:10.224 "data_size": 63488 00:12:10.224 } 00:12:10.224 ] 00:12:10.224 }' 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.224 12:38:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 [2024-12-14 12:38:10.230752] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.793 [2024-12-14 12:38:10.230843] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:10.793 [2024-12-14 12:38:10.230945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:10.793 [2024-12-14 12:38:10.231052] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:10.793 [2024-12-14 12:38:10.231118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:10.793 12:38:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 [2024-12-14 12:38:10.302605] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:10.793 [2024-12-14 12:38:10.302758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:10.793 [2024-12-14 12:38:10.302807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:10.793 [2024-12-14 12:38:10.302841] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:10.793 [2024-12-14 12:38:10.304995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:10.793 [2024-12-14 12:38:10.305084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:10.793 [2024-12-14 12:38:10.305217] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:10.793 [2024-12-14 12:38:10.305296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:10.793 [2024-12-14 12:38:10.305476] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:10.793 [2024-12-14 12:38:10.305536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:10.793 [2024-12-14 12:38:10.305572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:10.793 [2024-12-14 12:38:10.305660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:10.793 [2024-12-14 12:38:10.305787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:10.793 pt1 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.793 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.793 "name": "raid_bdev1", 00:12:10.793 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:10.793 "strip_size_kb": 0, 00:12:10.793 "state": "configuring", 00:12:10.793 "raid_level": "raid1", 00:12:10.793 "superblock": true, 00:12:10.794 "num_base_bdevs": 4, 00:12:10.794 "num_base_bdevs_discovered": 2, 00:12:10.794 "num_base_bdevs_operational": 3, 00:12:10.794 "base_bdevs_list": [ 00:12:10.794 { 00:12:10.794 "name": null, 00:12:10.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.794 "is_configured": false, 00:12:10.794 "data_offset": 2048, 00:12:10.794 
"data_size": 63488 00:12:10.794 }, 00:12:10.794 { 00:12:10.794 "name": "pt2", 00:12:10.794 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:10.794 "is_configured": true, 00:12:10.794 "data_offset": 2048, 00:12:10.794 "data_size": 63488 00:12:10.794 }, 00:12:10.794 { 00:12:10.794 "name": "pt3", 00:12:10.794 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:10.794 "is_configured": true, 00:12:10.794 "data_offset": 2048, 00:12:10.794 "data_size": 63488 00:12:10.794 }, 00:12:10.794 { 00:12:10.794 "name": null, 00:12:10.794 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:10.794 "is_configured": false, 00:12:10.794 "data_offset": 2048, 00:12:10.794 "data_size": 63488 00:12:10.794 } 00:12:10.794 ] 00:12:10.794 }' 00:12:10.794 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.794 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.053 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.053 [2024-12-14 
12:38:10.777849] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:11.053 [2024-12-14 12:38:10.777917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:11.053 [2024-12-14 12:38:10.777940] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:11.053 [2024-12-14 12:38:10.777950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:11.053 [2024-12-14 12:38:10.778464] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:11.053 [2024-12-14 12:38:10.778489] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:11.053 [2024-12-14 12:38:10.778579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:11.053 [2024-12-14 12:38:10.778604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:11.053 [2024-12-14 12:38:10.778757] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:11.053 [2024-12-14 12:38:10.778766] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:11.054 [2024-12-14 12:38:10.779040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:11.054 [2024-12-14 12:38:10.779210] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:11.054 [2024-12-14 12:38:10.779229] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:11.054 [2024-12-14 12:38:10.779384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.054 pt4 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:11.054 12:38:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.054 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.313 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.313 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.313 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.313 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.313 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.313 "name": "raid_bdev1", 00:12:11.313 "uuid": "3c67399d-3d25-41aa-aedb-deefd33a1bff", 00:12:11.313 "strip_size_kb": 0, 00:12:11.313 "state": "online", 00:12:11.313 "raid_level": "raid1", 00:12:11.313 "superblock": true, 00:12:11.313 "num_base_bdevs": 4, 00:12:11.313 "num_base_bdevs_discovered": 3, 00:12:11.313 "num_base_bdevs_operational": 3, 00:12:11.313 "base_bdevs_list": [ 00:12:11.313 { 
00:12:11.313 "name": null, 00:12:11.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.313 "is_configured": false, 00:12:11.313 "data_offset": 2048, 00:12:11.313 "data_size": 63488 00:12:11.313 }, 00:12:11.313 { 00:12:11.313 "name": "pt2", 00:12:11.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:11.313 "is_configured": true, 00:12:11.313 "data_offset": 2048, 00:12:11.313 "data_size": 63488 00:12:11.313 }, 00:12:11.313 { 00:12:11.313 "name": "pt3", 00:12:11.313 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:11.313 "is_configured": true, 00:12:11.313 "data_offset": 2048, 00:12:11.313 "data_size": 63488 00:12:11.313 }, 00:12:11.313 { 00:12:11.313 "name": "pt4", 00:12:11.313 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:11.313 "is_configured": true, 00:12:11.313 "data_offset": 2048, 00:12:11.313 "data_size": 63488 00:12:11.313 } 00:12:11.313 ] 00:12:11.313 }' 00:12:11.313 12:38:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.313 12:38:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:11.573 
12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.573 [2024-12-14 12:38:11.281349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 3c67399d-3d25-41aa-aedb-deefd33a1bff '!=' 3c67399d-3d25-41aa-aedb-deefd33a1bff ']' 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 76305 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 76305 ']' 00:12:11.573 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 76305 00:12:11.832 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:11.832 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.832 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76305 00:12:11.832 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.832 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.832 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76305' 00:12:11.832 killing process with pid 76305 00:12:11.832 12:38:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 76305 00:12:11.832 [2024-12-14 12:38:11.352388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.832 [2024-12-14 12:38:11.352539] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.832 12:38:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 76305 00:12:11.832 [2024-12-14 12:38:11.352649] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.832 [2024-12-14 12:38:11.352664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:12.090 [2024-12-14 12:38:11.751391] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:13.464 12:38:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:13.464 ************************************ 00:12:13.464 END TEST raid_superblock_test 00:12:13.464 ************************************ 00:12:13.464 00:12:13.464 real 0m8.464s 00:12:13.464 user 0m13.349s 00:12:13.464 sys 0m1.469s 00:12:13.464 12:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:13.464 12:38:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.465 12:38:12 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:13.465 12:38:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:13.465 12:38:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:13.465 12:38:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:13.465 ************************************ 00:12:13.465 START TEST raid_read_error_test 00:12:13.465 ************************************ 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:13.465 
12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:13.465 12:38:12 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.6jV2OrZ3cj 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76792 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76792 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 76792 ']' 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:13.465 12:38:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.465 [2024-12-14 12:38:13.048921] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:13.465 [2024-12-14 12:38:13.049032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76792 ] 00:12:13.723 [2024-12-14 12:38:13.202312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.723 [2024-12-14 12:38:13.344604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:13.982 [2024-12-14 12:38:13.547726] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:13.982 [2024-12-14 12:38:13.547761] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:14.241 12:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.241 12:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:14.241 12:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.241 12:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:14.241 12:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.241 12:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.500 BaseBdev1_malloc 00:12:14.500 12:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.500 12:38:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:14.500 12:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.500 12:38:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.500 true 00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.500 [2024-12-14 12:38:14.010362] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:14.500 [2024-12-14 12:38:14.010420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.500 [2024-12-14 12:38:14.010441] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:14.500 [2024-12-14 12:38:14.010452] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.500 [2024-12-14 12:38:14.012584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.500 [2024-12-14 12:38:14.012628] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:14.500 BaseBdev1 00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.500 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.500 BaseBdev2_malloc 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 true 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 [2024-12-14 12:38:14.074021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:14.501 [2024-12-14 12:38:14.074085] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.501 [2024-12-14 12:38:14.074100] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:14.501 [2024-12-14 12:38:14.074111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.501 [2024-12-14 12:38:14.076357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.501 [2024-12-14 12:38:14.076397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:14.501 BaseBdev2 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 BaseBdev3_malloc 00:12:14.501 12:38:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 true 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 [2024-12-14 12:38:14.158292] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:14.501 [2024-12-14 12:38:14.158347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.501 [2024-12-14 12:38:14.158365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:14.501 [2024-12-14 12:38:14.158376] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.501 [2024-12-14 12:38:14.160557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.501 [2024-12-14 12:38:14.160597] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:14.501 BaseBdev3 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 BaseBdev4_malloc 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 true 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 [2024-12-14 12:38:14.223460] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:14.501 [2024-12-14 12:38:14.223518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.501 [2024-12-14 12:38:14.223536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:14.501 [2024-12-14 12:38:14.223547] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.501 [2024-12-14 12:38:14.225849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.501 [2024-12-14 12:38:14.225892] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:14.501 BaseBdev4 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.501 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.501 [2024-12-14 12:38:14.235508] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:14.761 [2024-12-14 12:38:14.237516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.761 [2024-12-14 12:38:14.237600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.761 [2024-12-14 12:38:14.237670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:14.761 [2024-12-14 12:38:14.237918] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:14.761 [2024-12-14 12:38:14.237938] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.761 [2024-12-14 12:38:14.238238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:14.761 [2024-12-14 12:38:14.238447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:14.761 [2024-12-14 12:38:14.238467] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:14.761 [2024-12-14 12:38:14.238649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:14.761 12:38:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.761 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.761 "name": "raid_bdev1", 00:12:14.761 "uuid": "aa2107f7-6528-4e17-8a39-032ece92d5ae", 00:12:14.761 "strip_size_kb": 0, 00:12:14.761 "state": "online", 00:12:14.761 "raid_level": "raid1", 00:12:14.761 "superblock": true, 00:12:14.761 "num_base_bdevs": 4, 00:12:14.761 "num_base_bdevs_discovered": 4, 00:12:14.761 "num_base_bdevs_operational": 4, 00:12:14.761 "base_bdevs_list": [ 00:12:14.761 { 
00:12:14.761 "name": "BaseBdev1", 00:12:14.762 "uuid": "fda4545f-ab50-55c3-96b8-f7826c6d4310", 00:12:14.762 "is_configured": true, 00:12:14.762 "data_offset": 2048, 00:12:14.762 "data_size": 63488 00:12:14.762 }, 00:12:14.762 { 00:12:14.762 "name": "BaseBdev2", 00:12:14.762 "uuid": "6d572a9f-cece-5c72-9c2a-1a17372aa6e7", 00:12:14.762 "is_configured": true, 00:12:14.762 "data_offset": 2048, 00:12:14.762 "data_size": 63488 00:12:14.762 }, 00:12:14.762 { 00:12:14.762 "name": "BaseBdev3", 00:12:14.762 "uuid": "8b08611a-08be-50e7-b449-3757c9925465", 00:12:14.762 "is_configured": true, 00:12:14.762 "data_offset": 2048, 00:12:14.762 "data_size": 63488 00:12:14.762 }, 00:12:14.762 { 00:12:14.762 "name": "BaseBdev4", 00:12:14.762 "uuid": "dfa378e1-9ff1-52e4-9ab9-4b7baa3193bb", 00:12:14.762 "is_configured": true, 00:12:14.762 "data_offset": 2048, 00:12:14.762 "data_size": 63488 00:12:14.762 } 00:12:14.762 ] 00:12:14.762 }' 00:12:14.762 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.762 12:38:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.021 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:15.021 12:38:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:15.281 [2024-12-14 12:38:14.768004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.220 12:38:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.220 12:38:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.220 "name": "raid_bdev1", 00:12:16.220 "uuid": "aa2107f7-6528-4e17-8a39-032ece92d5ae", 00:12:16.220 "strip_size_kb": 0, 00:12:16.220 "state": "online", 00:12:16.220 "raid_level": "raid1", 00:12:16.220 "superblock": true, 00:12:16.220 "num_base_bdevs": 4, 00:12:16.220 "num_base_bdevs_discovered": 4, 00:12:16.220 "num_base_bdevs_operational": 4, 00:12:16.220 "base_bdevs_list": [ 00:12:16.220 { 00:12:16.220 "name": "BaseBdev1", 00:12:16.220 "uuid": "fda4545f-ab50-55c3-96b8-f7826c6d4310", 00:12:16.220 "is_configured": true, 00:12:16.220 "data_offset": 2048, 00:12:16.220 "data_size": 63488 00:12:16.220 }, 00:12:16.220 { 00:12:16.220 "name": "BaseBdev2", 00:12:16.220 "uuid": "6d572a9f-cece-5c72-9c2a-1a17372aa6e7", 00:12:16.220 "is_configured": true, 00:12:16.220 "data_offset": 2048, 00:12:16.220 "data_size": 63488 00:12:16.220 }, 00:12:16.220 { 00:12:16.220 "name": "BaseBdev3", 00:12:16.220 "uuid": "8b08611a-08be-50e7-b449-3757c9925465", 00:12:16.220 "is_configured": true, 00:12:16.220 "data_offset": 2048, 00:12:16.220 "data_size": 63488 00:12:16.220 }, 00:12:16.220 { 00:12:16.220 "name": "BaseBdev4", 00:12:16.220 "uuid": "dfa378e1-9ff1-52e4-9ab9-4b7baa3193bb", 00:12:16.220 "is_configured": true, 00:12:16.220 "data_offset": 2048, 00:12:16.220 "data_size": 63488 00:12:16.220 } 00:12:16.220 ] 00:12:16.220 }' 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.220 12:38:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.479 12:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.479 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.479 12:38:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.479 [2024-12-14 12:38:16.184629] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.480 [2024-12-14 12:38:16.184732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.480 [2024-12-14 12:38:16.187586] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.480 [2024-12-14 12:38:16.187649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.480 [2024-12-14 12:38:16.187768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.480 [2024-12-14 12:38:16.187781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:16.480 { 00:12:16.480 "results": [ 00:12:16.480 { 00:12:16.480 "job": "raid_bdev1", 00:12:16.480 "core_mask": "0x1", 00:12:16.480 "workload": "randrw", 00:12:16.480 "percentage": 50, 00:12:16.480 "status": "finished", 00:12:16.480 "queue_depth": 1, 00:12:16.480 "io_size": 131072, 00:12:16.480 "runtime": 1.417357, 00:12:16.480 "iops": 10399.638199832505, 00:12:16.480 "mibps": 1299.9547749790631, 00:12:16.480 "io_failed": 0, 00:12:16.480 "io_timeout": 0, 00:12:16.480 "avg_latency_us": 93.37484834659573, 00:12:16.480 "min_latency_us": 24.482096069868994, 00:12:16.480 "max_latency_us": 1452.380786026201 00:12:16.480 } 00:12:16.480 ], 00:12:16.480 "core_count": 1 00:12:16.480 } 00:12:16.480 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.480 12:38:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76792 00:12:16.480 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 76792 ']' 00:12:16.480 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 76792 00:12:16.480 12:38:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:16.480 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.480 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76792 00:12:16.786 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.786 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.786 killing process with pid 76792 00:12:16.786 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76792' 00:12:16.786 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 76792 00:12:16.786 [2024-12-14 12:38:16.234548] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:16.786 12:38:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 76792 00:12:17.062 [2024-12-14 12:38:16.566743] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.6jV2OrZ3cj 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:18.442 00:12:18.442 real 0m4.831s 00:12:18.442 user 0m5.762s 00:12:18.442 sys 0m0.585s 
00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.442 12:38:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.442 ************************************ 00:12:18.442 END TEST raid_read_error_test 00:12:18.442 ************************************ 00:12:18.442 12:38:17 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:18.442 12:38:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:18.442 12:38:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.442 12:38:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.442 ************************************ 00:12:18.442 START TEST raid_write_error_test 00:12:18.442 ************************************ 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.59YYkYoHJe 00:12:18.442 12:38:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76938 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76938 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76938 ']' 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.442 12:38:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.442 [2024-12-14 12:38:17.946962] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:12:18.442 [2024-12-14 12:38:17.947169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76938 ] 00:12:18.442 [2024-12-14 12:38:18.119083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.702 [2024-12-14 12:38:18.234495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.961 [2024-12-14 12:38:18.442581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:18.961 [2024-12-14 12:38:18.442615] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.220 BaseBdev1_malloc 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:19.220 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.221 true 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.221 [2024-12-14 12:38:18.853442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:19.221 [2024-12-14 12:38:18.853498] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.221 [2024-12-14 12:38:18.853533] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:19.221 [2024-12-14 12:38:18.853544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.221 [2024-12-14 12:38:18.855743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.221 [2024-12-14 12:38:18.855846] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.221 BaseBdev1 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.221 BaseBdev2_malloc 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:19.221 12:38:18 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.221 true 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.221 [2024-12-14 12:38:18.921024] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:19.221 [2024-12-14 12:38:18.921167] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.221 [2024-12-14 12:38:18.921192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:19.221 [2024-12-14 12:38:18.921205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.221 [2024-12-14 12:38:18.923472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.221 [2024-12-14 12:38:18.923511] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:19.221 BaseBdev2 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.221 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:19.481 BaseBdev3_malloc 00:12:19.481 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.481 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:19.481 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.481 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.481 true 00:12:19.481 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.481 12:38:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:19.481 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.481 12:38:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.481 [2024-12-14 12:38:19.002327] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:19.481 [2024-12-14 12:38:19.002379] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.481 [2024-12-14 12:38:19.002397] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:19.481 [2024-12-14 12:38:19.002407] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.481 [2024-12-14 12:38:19.004620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.481 [2024-12-14 12:38:19.004676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:19.481 BaseBdev3 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.481 BaseBdev4_malloc 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.481 true 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.481 [2024-12-14 12:38:19.071268] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:19.481 [2024-12-14 12:38:19.071383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.481 [2024-12-14 12:38:19.071408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:19.481 [2024-12-14 12:38:19.071422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.481 [2024-12-14 12:38:19.073624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.481 [2024-12-14 12:38:19.073665] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:19.481 BaseBdev4 
00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.481 [2024-12-14 12:38:19.083290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.481 [2024-12-14 12:38:19.085106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.481 [2024-12-14 12:38:19.085246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:19.481 [2024-12-14 12:38:19.085319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:19.481 [2024-12-14 12:38:19.085547] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:19.481 [2024-12-14 12:38:19.085564] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:19.481 [2024-12-14 12:38:19.085803] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:19.481 [2024-12-14 12:38:19.085970] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:19.481 [2024-12-14 12:38:19.085979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:19.481 [2024-12-14 12:38:19.086140] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.481 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.482 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.482 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.482 "name": "raid_bdev1", 00:12:19.482 "uuid": "425dddff-5436-424f-bcb0-d1c4927f0e1e", 00:12:19.482 "strip_size_kb": 0, 00:12:19.482 "state": "online", 00:12:19.482 "raid_level": "raid1", 00:12:19.482 "superblock": true, 00:12:19.482 "num_base_bdevs": 4, 00:12:19.482 "num_base_bdevs_discovered": 4, 00:12:19.482 
"num_base_bdevs_operational": 4, 00:12:19.482 "base_bdevs_list": [ 00:12:19.482 { 00:12:19.482 "name": "BaseBdev1", 00:12:19.482 "uuid": "287f80c0-1811-58e1-a9a7-e843a3824d53", 00:12:19.482 "is_configured": true, 00:12:19.482 "data_offset": 2048, 00:12:19.482 "data_size": 63488 00:12:19.482 }, 00:12:19.482 { 00:12:19.482 "name": "BaseBdev2", 00:12:19.482 "uuid": "21c1600a-4fed-5844-9f54-34e60915ae9a", 00:12:19.482 "is_configured": true, 00:12:19.482 "data_offset": 2048, 00:12:19.482 "data_size": 63488 00:12:19.482 }, 00:12:19.482 { 00:12:19.482 "name": "BaseBdev3", 00:12:19.482 "uuid": "69774420-5f9d-522b-8a5b-7fc32ea923da", 00:12:19.482 "is_configured": true, 00:12:19.482 "data_offset": 2048, 00:12:19.482 "data_size": 63488 00:12:19.482 }, 00:12:19.482 { 00:12:19.482 "name": "BaseBdev4", 00:12:19.482 "uuid": "57b4ffa9-a5e4-529e-8d4e-7c0df2189863", 00:12:19.482 "is_configured": true, 00:12:19.482 "data_offset": 2048, 00:12:19.482 "data_size": 63488 00:12:19.482 } 00:12:19.482 ] 00:12:19.482 }' 00:12:19.482 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.482 12:38:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.050 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:20.050 12:38:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:20.050 [2024-12-14 12:38:19.663622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.989 [2024-12-14 12:38:20.570335] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:20.989 [2024-12-14 12:38:20.570501] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.989 [2024-12-14 12:38:20.570784] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.989 "name": "raid_bdev1", 00:12:20.989 "uuid": "425dddff-5436-424f-bcb0-d1c4927f0e1e", 00:12:20.989 "strip_size_kb": 0, 00:12:20.989 "state": "online", 00:12:20.989 "raid_level": "raid1", 00:12:20.989 "superblock": true, 00:12:20.989 "num_base_bdevs": 4, 00:12:20.989 "num_base_bdevs_discovered": 3, 00:12:20.989 "num_base_bdevs_operational": 3, 00:12:20.989 "base_bdevs_list": [ 00:12:20.989 { 00:12:20.989 "name": null, 00:12:20.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.989 "is_configured": false, 00:12:20.989 "data_offset": 0, 00:12:20.989 "data_size": 63488 00:12:20.989 }, 00:12:20.989 { 00:12:20.989 "name": "BaseBdev2", 00:12:20.989 "uuid": "21c1600a-4fed-5844-9f54-34e60915ae9a", 00:12:20.989 "is_configured": true, 00:12:20.989 "data_offset": 2048, 00:12:20.989 "data_size": 63488 00:12:20.989 }, 00:12:20.989 { 00:12:20.989 "name": "BaseBdev3", 00:12:20.989 "uuid": "69774420-5f9d-522b-8a5b-7fc32ea923da", 00:12:20.989 "is_configured": true, 00:12:20.989 "data_offset": 2048, 00:12:20.989 "data_size": 63488 00:12:20.989 }, 00:12:20.989 { 00:12:20.989 "name": "BaseBdev4", 00:12:20.989 "uuid": "57b4ffa9-a5e4-529e-8d4e-7c0df2189863", 00:12:20.989 "is_configured": true, 00:12:20.989 "data_offset": 2048, 00:12:20.989 "data_size": 63488 00:12:20.989 } 00:12:20.989 ] 
00:12:20.989 }' 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.989 12:38:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.559 [2024-12-14 12:38:21.025943] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:21.559 [2024-12-14 12:38:21.025980] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:21.559 [2024-12-14 12:38:21.028662] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:21.559 [2024-12-14 12:38:21.028709] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.559 [2024-12-14 12:38:21.028810] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:21.559 [2024-12-14 12:38:21.028822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:21.559 { 00:12:21.559 "results": [ 00:12:21.559 { 00:12:21.559 "job": "raid_bdev1", 00:12:21.559 "core_mask": "0x1", 00:12:21.559 "workload": "randrw", 00:12:21.559 "percentage": 50, 00:12:21.559 "status": "finished", 00:12:21.559 "queue_depth": 1, 00:12:21.559 "io_size": 131072, 00:12:21.559 "runtime": 1.363053, 00:12:21.559 "iops": 11507.98978469656, 00:12:21.559 "mibps": 1438.49872308707, 00:12:21.559 "io_failed": 0, 00:12:21.559 "io_timeout": 0, 00:12:21.559 "avg_latency_us": 84.1774995865921, 00:12:21.559 "min_latency_us": 23.811353711790392, 00:12:21.559 "max_latency_us": 1345.0620087336245 00:12:21.559 } 00:12:21.559 ], 00:12:21.559 "core_count": 1 
00:12:21.559 } 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76938 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76938 ']' 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76938 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76938 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:21.559 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76938' 00:12:21.559 killing process with pid 76938 00:12:21.560 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76938 00:12:21.560 [2024-12-14 12:38:21.075359] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:21.560 12:38:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76938 00:12:21.819 [2024-12-14 12:38:21.395535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.59YYkYoHJe 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:23.200 00:12:23.200 real 0m4.732s 00:12:23.200 user 0m5.615s 00:12:23.200 sys 0m0.584s 00:12:23.200 ************************************ 00:12:23.200 END TEST raid_write_error_test 00:12:23.200 ************************************ 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:23.200 12:38:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.200 12:38:22 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:23.200 12:38:22 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:23.200 12:38:22 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:23.200 12:38:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:23.200 12:38:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:23.200 12:38:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:23.200 ************************************ 00:12:23.200 START TEST raid_rebuild_test 00:12:23.200 ************************************ 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:23.200 
12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77076 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77076 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77076 ']' 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.200 12:38:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.200 [2024-12-14 12:38:22.745214] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:23.200 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:23.200 Zero copy mechanism will not be used. 
00:12:23.200 [2024-12-14 12:38:22.745375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77076 ] 00:12:23.200 [2024-12-14 12:38:22.917562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.460 [2024-12-14 12:38:23.031979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.719 [2024-12-14 12:38:23.231209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.719 [2024-12-14 12:38:23.231364] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.979 BaseBdev1_malloc 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.979 [2024-12-14 12:38:23.649256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:23.979 
[2024-12-14 12:38:23.649319] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.979 [2024-12-14 12:38:23.649342] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:23.979 [2024-12-14 12:38:23.649353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.979 [2024-12-14 12:38:23.651678] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.979 [2024-12-14 12:38:23.651719] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:23.979 BaseBdev1 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.979 BaseBdev2_malloc 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.979 [2024-12-14 12:38:23.702111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:23.979 [2024-12-14 12:38:23.702169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.979 [2024-12-14 12:38:23.702203] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:23.979 [2024-12-14 12:38:23.702215] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.979 [2024-12-14 12:38:23.704294] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.979 [2024-12-14 12:38:23.704331] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:23.979 BaseBdev2 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.979 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.238 spare_malloc 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.239 spare_delay 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.239 [2024-12-14 12:38:23.782111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:24.239 [2024-12-14 12:38:23.782165] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:24.239 [2024-12-14 12:38:23.782184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:24.239 [2024-12-14 12:38:23.782195] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.239 [2024-12-14 12:38:23.784325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.239 [2024-12-14 12:38:23.784437] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:24.239 spare 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.239 [2024-12-14 12:38:23.794173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:24.239 [2024-12-14 12:38:23.796012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.239 [2024-12-14 12:38:23.796131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:24.239 [2024-12-14 12:38:23.796147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:24.239 [2024-12-14 12:38:23.796448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:24.239 [2024-12-14 12:38:23.796606] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:24.239 [2024-12-14 12:38:23.796617] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:24.239 [2024-12-14 12:38:23.796790] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.239 "name": "raid_bdev1", 00:12:24.239 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:24.239 "strip_size_kb": 0, 00:12:24.239 "state": "online", 00:12:24.239 
"raid_level": "raid1", 00:12:24.239 "superblock": false, 00:12:24.239 "num_base_bdevs": 2, 00:12:24.239 "num_base_bdevs_discovered": 2, 00:12:24.239 "num_base_bdevs_operational": 2, 00:12:24.239 "base_bdevs_list": [ 00:12:24.239 { 00:12:24.239 "name": "BaseBdev1", 00:12:24.239 "uuid": "2cdd08c8-31de-554c-b3f5-962878bd4b7d", 00:12:24.239 "is_configured": true, 00:12:24.239 "data_offset": 0, 00:12:24.239 "data_size": 65536 00:12:24.239 }, 00:12:24.239 { 00:12:24.239 "name": "BaseBdev2", 00:12:24.239 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:24.239 "is_configured": true, 00:12:24.239 "data_offset": 0, 00:12:24.239 "data_size": 65536 00:12:24.239 } 00:12:24.239 ] 00:12:24.239 }' 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.239 12:38:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:24.808 [2024-12-14 12:38:24.249606] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:24.808 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:24.808 [2024-12-14 12:38:24.492967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:24.808 /dev/nbd0 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:24.809 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:24.809 1+0 records in 00:12:24.809 1+0 records out 00:12:24.809 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349331 s, 11.7 MB/s 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:25.068 12:38:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:29.261 65536+0 records in 00:12:29.261 65536+0 records out 00:12:29.261 33554432 bytes (34 MB, 32 MiB) copied, 4.27563 s, 7.8 MB/s 00:12:29.261 12:38:28 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:29.261 12:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.261 12:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:29.261 12:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.261 12:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:29.261 12:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.261 12:38:28 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:29.520 [2024-12-14 12:38:29.045149] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.520 [2024-12-14 12:38:29.081220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.520 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.521 12:38:29 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.521 "name": "raid_bdev1", 00:12:29.521 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:29.521 "strip_size_kb": 0, 00:12:29.521 "state": "online", 00:12:29.521 "raid_level": "raid1", 00:12:29.521 "superblock": false, 00:12:29.521 "num_base_bdevs": 2, 00:12:29.521 "num_base_bdevs_discovered": 1, 00:12:29.521 "num_base_bdevs_operational": 1, 00:12:29.521 "base_bdevs_list": [ 00:12:29.521 { 00:12:29.521 "name": null, 00:12:29.521 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.521 "is_configured": false, 00:12:29.521 "data_offset": 0, 00:12:29.521 "data_size": 65536 00:12:29.521 }, 00:12:29.521 { 00:12:29.521 "name": "BaseBdev2", 00:12:29.521 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:29.521 "is_configured": true, 00:12:29.521 "data_offset": 0, 00:12:29.521 "data_size": 65536 00:12:29.521 } 00:12:29.521 ] 00:12:29.521 }' 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.521 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.090 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:30.090 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.090 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.090 [2024-12-14 12:38:29.552453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:30.090 [2024-12-14 12:38:29.570955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:30.090 12:38:29 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.090 12:38:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:30.090 [2024-12-14 12:38:29.573194] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:31.030 "name": "raid_bdev1", 00:12:31.030 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:31.030 "strip_size_kb": 0, 00:12:31.030 "state": "online", 00:12:31.030 "raid_level": "raid1", 00:12:31.030 "superblock": false, 00:12:31.030 "num_base_bdevs": 2, 00:12:31.030 "num_base_bdevs_discovered": 2, 00:12:31.030 "num_base_bdevs_operational": 2, 00:12:31.030 "process": { 00:12:31.030 "type": "rebuild", 00:12:31.030 "target": "spare", 00:12:31.030 "progress": { 00:12:31.030 
"blocks": 20480, 00:12:31.030 "percent": 31 00:12:31.030 } 00:12:31.030 }, 00:12:31.030 "base_bdevs_list": [ 00:12:31.030 { 00:12:31.030 "name": "spare", 00:12:31.030 "uuid": "bfe5b407-1efa-5abc-b6c0-ba8ab48028e0", 00:12:31.030 "is_configured": true, 00:12:31.030 "data_offset": 0, 00:12:31.030 "data_size": 65536 00:12:31.030 }, 00:12:31.030 { 00:12:31.030 "name": "BaseBdev2", 00:12:31.030 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:31.030 "is_configured": true, 00:12:31.030 "data_offset": 0, 00:12:31.030 "data_size": 65536 00:12:31.030 } 00:12:31.030 ] 00:12:31.030 }' 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.030 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.030 [2024-12-14 12:38:30.735869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.293 [2024-12-14 12:38:30.778758] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:31.293 [2024-12-14 12:38:30.778849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.293 [2024-12-14 12:38:30.778864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:31.293 [2024-12-14 12:38:30.778874] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:31.293 12:38:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.293 "name": "raid_bdev1", 00:12:31.293 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:31.293 "strip_size_kb": 0, 00:12:31.293 "state": "online", 00:12:31.293 "raid_level": "raid1", 00:12:31.293 
"superblock": false, 00:12:31.293 "num_base_bdevs": 2, 00:12:31.293 "num_base_bdevs_discovered": 1, 00:12:31.293 "num_base_bdevs_operational": 1, 00:12:31.293 "base_bdevs_list": [ 00:12:31.293 { 00:12:31.293 "name": null, 00:12:31.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.293 "is_configured": false, 00:12:31.293 "data_offset": 0, 00:12:31.293 "data_size": 65536 00:12:31.293 }, 00:12:31.293 { 00:12:31.293 "name": "BaseBdev2", 00:12:31.293 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:31.293 "is_configured": true, 00:12:31.293 "data_offset": 0, 00:12:31.293 "data_size": 65536 00:12:31.293 } 00:12:31.293 ] 00:12:31.293 }' 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.293 12:38:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.552 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:31.552 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:31.552 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:31.552 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:31.552 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:31.811 "name": "raid_bdev1", 00:12:31.811 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:31.811 "strip_size_kb": 0, 00:12:31.811 "state": "online", 00:12:31.811 "raid_level": "raid1", 00:12:31.811 "superblock": false, 00:12:31.811 "num_base_bdevs": 2, 00:12:31.811 "num_base_bdevs_discovered": 1, 00:12:31.811 "num_base_bdevs_operational": 1, 00:12:31.811 "base_bdevs_list": [ 00:12:31.811 { 00:12:31.811 "name": null, 00:12:31.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.811 "is_configured": false, 00:12:31.811 "data_offset": 0, 00:12:31.811 "data_size": 65536 00:12:31.811 }, 00:12:31.811 { 00:12:31.811 "name": "BaseBdev2", 00:12:31.811 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:31.811 "is_configured": true, 00:12:31.811 "data_offset": 0, 00:12:31.811 "data_size": 65536 00:12:31.811 } 00:12:31.811 ] 00:12:31.811 }' 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.811 [2024-12-14 12:38:31.424950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:31.811 [2024-12-14 12:38:31.442525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:31.811 12:38:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.811 
12:38:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:31.811 [2024-12-14 12:38:31.444536] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:32.845 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:32.845 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:32.845 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:32.845 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:32.845 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:32.845 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.845 12:38:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.845 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.846 12:38:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.846 12:38:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.846 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:32.846 "name": "raid_bdev1", 00:12:32.846 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:32.846 "strip_size_kb": 0, 00:12:32.846 "state": "online", 00:12:32.846 "raid_level": "raid1", 00:12:32.846 "superblock": false, 00:12:32.846 "num_base_bdevs": 2, 00:12:32.846 "num_base_bdevs_discovered": 2, 00:12:32.846 "num_base_bdevs_operational": 2, 00:12:32.846 "process": { 00:12:32.846 "type": "rebuild", 00:12:32.846 "target": "spare", 00:12:32.846 "progress": { 00:12:32.846 "blocks": 20480, 00:12:32.846 "percent": 31 00:12:32.846 } 00:12:32.846 }, 00:12:32.846 "base_bdevs_list": [ 
00:12:32.846 { 00:12:32.846 "name": "spare", 00:12:32.846 "uuid": "bfe5b407-1efa-5abc-b6c0-ba8ab48028e0", 00:12:32.846 "is_configured": true, 00:12:32.846 "data_offset": 0, 00:12:32.846 "data_size": 65536 00:12:32.846 }, 00:12:32.846 { 00:12:32.846 "name": "BaseBdev2", 00:12:32.846 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:32.846 "is_configured": true, 00:12:32.846 "data_offset": 0, 00:12:32.846 "data_size": 65536 00:12:32.846 } 00:12:32.846 ] 00:12:32.846 }' 00:12:32.846 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:32.846 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:32.846 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=367 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:33.105 
12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:33.105 "name": "raid_bdev1", 00:12:33.105 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:33.105 "strip_size_kb": 0, 00:12:33.105 "state": "online", 00:12:33.105 "raid_level": "raid1", 00:12:33.105 "superblock": false, 00:12:33.105 "num_base_bdevs": 2, 00:12:33.105 "num_base_bdevs_discovered": 2, 00:12:33.105 "num_base_bdevs_operational": 2, 00:12:33.105 "process": { 00:12:33.105 "type": "rebuild", 00:12:33.105 "target": "spare", 00:12:33.105 "progress": { 00:12:33.105 "blocks": 22528, 00:12:33.105 "percent": 34 00:12:33.105 } 00:12:33.105 }, 00:12:33.105 "base_bdevs_list": [ 00:12:33.105 { 00:12:33.105 "name": "spare", 00:12:33.105 "uuid": "bfe5b407-1efa-5abc-b6c0-ba8ab48028e0", 00:12:33.105 "is_configured": true, 00:12:33.105 "data_offset": 0, 00:12:33.105 "data_size": 65536 00:12:33.105 }, 00:12:33.105 { 00:12:33.105 "name": "BaseBdev2", 00:12:33.105 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:33.105 "is_configured": true, 00:12:33.105 "data_offset": 0, 00:12:33.105 "data_size": 65536 00:12:33.105 } 00:12:33.105 ] 00:12:33.105 }' 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:33.105 12:38:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:34.042 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:34.042 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.042 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.042 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.042 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.043 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.043 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.043 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.043 12:38:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.043 12:38:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.043 12:38:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.043 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.043 "name": "raid_bdev1", 00:12:34.043 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:34.043 "strip_size_kb": 0, 00:12:34.043 "state": "online", 00:12:34.043 "raid_level": "raid1", 00:12:34.043 "superblock": false, 00:12:34.043 "num_base_bdevs": 2, 00:12:34.043 "num_base_bdevs_discovered": 2, 00:12:34.043 "num_base_bdevs_operational": 2, 00:12:34.043 "process": { 
00:12:34.043 "type": "rebuild", 00:12:34.043 "target": "spare", 00:12:34.043 "progress": { 00:12:34.043 "blocks": 45056, 00:12:34.043 "percent": 68 00:12:34.043 } 00:12:34.043 }, 00:12:34.043 "base_bdevs_list": [ 00:12:34.043 { 00:12:34.043 "name": "spare", 00:12:34.043 "uuid": "bfe5b407-1efa-5abc-b6c0-ba8ab48028e0", 00:12:34.043 "is_configured": true, 00:12:34.043 "data_offset": 0, 00:12:34.043 "data_size": 65536 00:12:34.043 }, 00:12:34.043 { 00:12:34.043 "name": "BaseBdev2", 00:12:34.043 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:34.043 "is_configured": true, 00:12:34.043 "data_offset": 0, 00:12:34.043 "data_size": 65536 00:12:34.043 } 00:12:34.043 ] 00:12:34.043 }' 00:12:34.043 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.302 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.302 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.302 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.302 12:38:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:35.241 [2024-12-14 12:38:34.658770] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:35.241 [2024-12-14 12:38:34.658950] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:35.241 [2024-12-14 12:38:34.659043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.241 12:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.242 12:38:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.242 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.242 "name": "raid_bdev1", 00:12:35.242 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:35.242 "strip_size_kb": 0, 00:12:35.242 "state": "online", 00:12:35.242 "raid_level": "raid1", 00:12:35.242 "superblock": false, 00:12:35.242 "num_base_bdevs": 2, 00:12:35.242 "num_base_bdevs_discovered": 2, 00:12:35.242 "num_base_bdevs_operational": 2, 00:12:35.242 "base_bdevs_list": [ 00:12:35.242 { 00:12:35.242 "name": "spare", 00:12:35.242 "uuid": "bfe5b407-1efa-5abc-b6c0-ba8ab48028e0", 00:12:35.242 "is_configured": true, 00:12:35.242 "data_offset": 0, 00:12:35.242 "data_size": 65536 00:12:35.242 }, 00:12:35.242 { 00:12:35.242 "name": "BaseBdev2", 00:12:35.242 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:35.242 "is_configured": true, 00:12:35.242 "data_offset": 0, 00:12:35.242 "data_size": 65536 00:12:35.242 } 00:12:35.242 ] 00:12:35.242 }' 00:12:35.242 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.242 12:38:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:35.242 12:38:34 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.505 "name": "raid_bdev1", 00:12:35.505 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:35.505 "strip_size_kb": 0, 00:12:35.505 "state": "online", 00:12:35.505 "raid_level": "raid1", 00:12:35.505 "superblock": false, 00:12:35.505 "num_base_bdevs": 2, 00:12:35.505 "num_base_bdevs_discovered": 2, 00:12:35.505 "num_base_bdevs_operational": 2, 00:12:35.505 "base_bdevs_list": [ 00:12:35.505 { 00:12:35.505 "name": "spare", 00:12:35.505 "uuid": "bfe5b407-1efa-5abc-b6c0-ba8ab48028e0", 00:12:35.505 "is_configured": true, 
00:12:35.505 "data_offset": 0, 00:12:35.505 "data_size": 65536 00:12:35.505 }, 00:12:35.505 { 00:12:35.505 "name": "BaseBdev2", 00:12:35.505 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:35.505 "is_configured": true, 00:12:35.505 "data_offset": 0, 00:12:35.505 "data_size": 65536 00:12:35.505 } 00:12:35.505 ] 00:12:35.505 }' 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:35.505 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.506 "name": "raid_bdev1", 00:12:35.506 "uuid": "0e64b450-a8b3-4335-a60c-31c063567e3a", 00:12:35.506 "strip_size_kb": 0, 00:12:35.506 "state": "online", 00:12:35.506 "raid_level": "raid1", 00:12:35.506 "superblock": false, 00:12:35.506 "num_base_bdevs": 2, 00:12:35.506 "num_base_bdevs_discovered": 2, 00:12:35.506 "num_base_bdevs_operational": 2, 00:12:35.506 "base_bdevs_list": [ 00:12:35.506 { 00:12:35.506 "name": "spare", 00:12:35.506 "uuid": "bfe5b407-1efa-5abc-b6c0-ba8ab48028e0", 00:12:35.506 "is_configured": true, 00:12:35.506 "data_offset": 0, 00:12:35.506 "data_size": 65536 00:12:35.506 }, 00:12:35.506 { 00:12:35.506 "name": "BaseBdev2", 00:12:35.506 "uuid": "2b66959f-2805-5934-a267-3cf451bef0e4", 00:12:35.506 "is_configured": true, 00:12:35.506 "data_offset": 0, 00:12:35.506 "data_size": 65536 00:12:35.506 } 00:12:35.506 ] 00:12:35.506 }' 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.506 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.077 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:36.077 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.078 [2024-12-14 12:38:35.557509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:36.078 [2024-12-14 12:38:35.557606] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:36.078 [2024-12-14 12:38:35.557716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.078 [2024-12-14 12:38:35.557821] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.078 [2024-12-14 12:38:35.557907] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.078 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:36.337 /dev/nbd0 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.337 1+0 records in 00:12:36.337 1+0 records out 00:12:36.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000366288 s, 11.2 MB/s 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.337 12:38:35 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:36.337 /dev/nbd1 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:36.597 1+0 records in 00:12:36.597 1+0 records out 00:12:36.597 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416427 s, 9.8 MB/s 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.597 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:36.856 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77076 00:12:37.116 12:38:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77076 ']' 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77076 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77076 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77076' 00:12:37.116 killing process with pid 77076 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77076 00:12:37.116 Received shutdown signal, test time was about 60.000000 seconds 00:12:37.116 00:12:37.116 Latency(us) 00:12:37.116 [2024-12-14T12:38:36.854Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.116 [2024-12-14T12:38:36.854Z] =================================================================================================================== 00:12:37.116 [2024-12-14T12:38:36.854Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:37.116 [2024-12-14 12:38:36.751426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.116 12:38:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77076 00:12:37.375 [2024-12-14 12:38:37.054903] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:38.751 00:12:38.751 real 0m15.521s 00:12:38.751 user 0m17.552s 00:12:38.751 sys 0m2.942s 00:12:38.751 12:38:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.751 ************************************ 00:12:38.751 END TEST raid_rebuild_test 00:12:38.751 ************************************ 00:12:38.751 12:38:38 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:38.751 12:38:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:38.751 12:38:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:38.751 12:38:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:38.751 ************************************ 00:12:38.751 START TEST raid_rebuild_test_sb 00:12:38.751 ************************************ 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77504 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77504 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77504 ']' 00:12:38.751 12:38:38 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:38.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:38.751 12:38:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.751 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:38.751 Zero copy mechanism will not be used. 00:12:38.751 [2024-12-14 12:38:38.343706] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:38.751 [2024-12-14 12:38:38.343856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77504 ] 00:12:39.010 [2024-12-14 12:38:38.525812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.010 [2024-12-14 12:38:38.639613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.269 [2024-12-14 12:38:38.841215] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.269 [2024-12-14 12:38:38.841288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.528 BaseBdev1_malloc 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.528 [2024-12-14 12:38:39.226524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:39.528 [2024-12-14 12:38:39.226601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.528 [2024-12-14 12:38:39.226623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:39.528 [2024-12-14 12:38:39.226634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.528 [2024-12-14 12:38:39.228737] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.528 [2024-12-14 12:38:39.228775] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:39.528 BaseBdev1 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:39.528 12:38:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.528 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 BaseBdev2_malloc 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 [2024-12-14 12:38:39.281962] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:39.787 [2024-12-14 12:38:39.282018] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.787 [2024-12-14 12:38:39.282036] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:39.787 [2024-12-14 12:38:39.282056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.787 [2024-12-14 12:38:39.284131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.787 [2024-12-14 12:38:39.284165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:39.787 BaseBdev2 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 spare_malloc 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 spare_delay 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 [2024-12-14 12:38:39.363576] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:39.787 [2024-12-14 12:38:39.363636] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.787 [2024-12-14 12:38:39.363658] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:39.787 [2024-12-14 12:38:39.363669] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.787 [2024-12-14 12:38:39.366015] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.787 [2024-12-14 12:38:39.366135] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:39.787 spare 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.787 12:38:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.787 [2024-12-14 12:38:39.375633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.787 [2024-12-14 12:38:39.377606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.787 [2024-12-14 12:38:39.377908] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:39.787 [2024-12-14 12:38:39.377933] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.787 [2024-12-14 12:38:39.378266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:39.787 [2024-12-14 12:38:39.378459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:39.787 [2024-12-14 12:38:39.378470] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:39.787 [2024-12-14 12:38:39.378664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.787 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.788 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.788 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.788 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.788 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.788 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.788 "name": "raid_bdev1", 00:12:39.788 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:39.788 "strip_size_kb": 0, 00:12:39.788 "state": "online", 00:12:39.788 "raid_level": "raid1", 00:12:39.788 "superblock": true, 00:12:39.788 "num_base_bdevs": 2, 00:12:39.788 "num_base_bdevs_discovered": 2, 00:12:39.788 "num_base_bdevs_operational": 2, 00:12:39.788 "base_bdevs_list": [ 00:12:39.788 { 00:12:39.788 "name": "BaseBdev1", 00:12:39.788 "uuid": "aa82e716-0137-5afb-97d1-9688a595f37b", 00:12:39.788 "is_configured": true, 00:12:39.788 "data_offset": 2048, 00:12:39.788 "data_size": 63488 00:12:39.788 }, 00:12:39.788 { 00:12:39.788 "name": "BaseBdev2", 00:12:39.788 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:39.788 "is_configured": true, 00:12:39.788 "data_offset": 2048, 00:12:39.788 "data_size": 63488 00:12:39.788 } 00:12:39.788 ] 00:12:39.788 }' 00:12:39.788 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.788 12:38:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.355 [2024-12-14 12:38:39.831128] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:40.355 12:38:39 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:40.614 [2024-12-14 12:38:40.098442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:40.614 /dev/nbd0 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:40.614 1+0 records in 00:12:40.614 1+0 records out 00:12:40.614 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313981 s, 13.0 MB/s 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:40.614 12:38:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:44.805 63488+0 records in 00:12:44.805 63488+0 records out 00:12:44.805 32505856 bytes (33 MB, 31 MiB) copied, 3.89316 s, 8.3 MB/s 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:44.805 12:38:44 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.805 [2024-12-14 12:38:44.312129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.805 [2024-12-14 12:38:44.332214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.805 "name": "raid_bdev1", 00:12:44.805 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:44.805 "strip_size_kb": 0, 00:12:44.805 "state": "online", 00:12:44.805 "raid_level": "raid1", 00:12:44.805 "superblock": true, 
00:12:44.805 "num_base_bdevs": 2, 00:12:44.805 "num_base_bdevs_discovered": 1, 00:12:44.805 "num_base_bdevs_operational": 1, 00:12:44.805 "base_bdevs_list": [ 00:12:44.805 { 00:12:44.805 "name": null, 00:12:44.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.805 "is_configured": false, 00:12:44.805 "data_offset": 0, 00:12:44.805 "data_size": 63488 00:12:44.805 }, 00:12:44.805 { 00:12:44.805 "name": "BaseBdev2", 00:12:44.805 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:44.805 "is_configured": true, 00:12:44.805 "data_offset": 2048, 00:12:44.805 "data_size": 63488 00:12:44.805 } 00:12:44.805 ] 00:12:44.805 }' 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.805 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.065 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:45.065 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.065 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.065 [2024-12-14 12:38:44.759540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.065 [2024-12-14 12:38:44.776007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:45.065 12:38:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.065 12:38:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:45.065 [2024-12-14 12:38:44.777932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.467 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.467 "name": "raid_bdev1", 00:12:46.467 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:46.467 "strip_size_kb": 0, 00:12:46.467 "state": "online", 00:12:46.467 "raid_level": "raid1", 00:12:46.467 "superblock": true, 00:12:46.467 "num_base_bdevs": 2, 00:12:46.467 "num_base_bdevs_discovered": 2, 00:12:46.467 "num_base_bdevs_operational": 2, 00:12:46.467 "process": { 00:12:46.467 "type": "rebuild", 00:12:46.467 "target": "spare", 00:12:46.467 "progress": { 00:12:46.467 "blocks": 20480, 00:12:46.467 "percent": 32 00:12:46.467 } 00:12:46.467 }, 00:12:46.467 "base_bdevs_list": [ 00:12:46.467 { 00:12:46.467 "name": "spare", 00:12:46.468 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:46.468 "is_configured": true, 00:12:46.468 "data_offset": 2048, 00:12:46.468 "data_size": 63488 00:12:46.468 }, 00:12:46.468 { 00:12:46.468 "name": "BaseBdev2", 00:12:46.468 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:46.468 "is_configured": true, 00:12:46.468 "data_offset": 2048, 00:12:46.468 "data_size": 63488 
00:12:46.468 } 00:12:46.468 ] 00:12:46.468 }' 00:12:46.468 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.468 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.468 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.468 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.468 12:38:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:46.468 12:38:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.468 12:38:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.468 [2024-12-14 12:38:45.917587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.468 [2024-12-14 12:38:45.983934] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:46.468 [2024-12-14 12:38:45.984018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.468 [2024-12-14 12:38:45.984034] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.468 [2024-12-14 12:38:45.984058] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.468 "name": "raid_bdev1", 00:12:46.468 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:46.468 "strip_size_kb": 0, 00:12:46.468 "state": "online", 00:12:46.468 "raid_level": "raid1", 00:12:46.468 "superblock": true, 00:12:46.468 "num_base_bdevs": 2, 00:12:46.468 "num_base_bdevs_discovered": 1, 00:12:46.468 "num_base_bdevs_operational": 1, 00:12:46.468 "base_bdevs_list": [ 00:12:46.468 { 00:12:46.468 "name": null, 00:12:46.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.468 "is_configured": false, 00:12:46.468 "data_offset": 0, 00:12:46.468 "data_size": 63488 00:12:46.468 }, 00:12:46.468 { 00:12:46.468 "name": "BaseBdev2", 00:12:46.468 "uuid": 
"429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:46.468 "is_configured": true, 00:12:46.468 "data_offset": 2048, 00:12:46.468 "data_size": 63488 00:12:46.468 } 00:12:46.468 ] 00:12:46.468 }' 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.468 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.726 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.986 "name": "raid_bdev1", 00:12:46.986 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:46.986 "strip_size_kb": 0, 00:12:46.986 "state": "online", 00:12:46.986 "raid_level": "raid1", 00:12:46.986 "superblock": true, 00:12:46.986 "num_base_bdevs": 2, 00:12:46.986 "num_base_bdevs_discovered": 1, 00:12:46.986 "num_base_bdevs_operational": 1, 00:12:46.986 "base_bdevs_list": [ 00:12:46.986 { 
00:12:46.986 "name": null, 00:12:46.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.986 "is_configured": false, 00:12:46.986 "data_offset": 0, 00:12:46.986 "data_size": 63488 00:12:46.986 }, 00:12:46.986 { 00:12:46.986 "name": "BaseBdev2", 00:12:46.986 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:46.986 "is_configured": true, 00:12:46.986 "data_offset": 2048, 00:12:46.986 "data_size": 63488 00:12:46.986 } 00:12:46.986 ] 00:12:46.986 }' 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.986 [2024-12-14 12:38:46.583373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:46.986 [2024-12-14 12:38:46.599092] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.986 12:38:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:46.986 [2024-12-14 12:38:46.600897] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:47.922 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:47.923 12:38:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.923 "name": "raid_bdev1", 00:12:47.923 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:47.923 "strip_size_kb": 0, 00:12:47.923 "state": "online", 00:12:47.923 "raid_level": "raid1", 00:12:47.923 "superblock": true, 00:12:47.923 "num_base_bdevs": 2, 00:12:47.923 "num_base_bdevs_discovered": 2, 00:12:47.923 "num_base_bdevs_operational": 2, 00:12:47.923 "process": { 00:12:47.923 "type": "rebuild", 00:12:47.923 "target": "spare", 00:12:47.923 "progress": { 00:12:47.923 "blocks": 20480, 00:12:47.923 "percent": 32 00:12:47.923 } 00:12:47.923 }, 00:12:47.923 "base_bdevs_list": [ 00:12:47.923 { 00:12:47.923 "name": "spare", 00:12:47.923 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:47.923 "is_configured": true, 00:12:47.923 "data_offset": 2048, 00:12:47.923 "data_size": 63488 00:12:47.923 }, 00:12:47.923 { 00:12:47.923 "name": "BaseBdev2", 00:12:47.923 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:47.923 
"is_configured": true, 00:12:47.923 "data_offset": 2048, 00:12:47.923 "data_size": 63488 00:12:47.923 } 00:12:47.923 ] 00:12:47.923 }' 00:12:47.923 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:48.182 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=382 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.182 "name": "raid_bdev1", 00:12:48.182 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:48.182 "strip_size_kb": 0, 00:12:48.182 "state": "online", 00:12:48.182 "raid_level": "raid1", 00:12:48.182 "superblock": true, 00:12:48.182 "num_base_bdevs": 2, 00:12:48.182 "num_base_bdevs_discovered": 2, 00:12:48.182 "num_base_bdevs_operational": 2, 00:12:48.182 "process": { 00:12:48.182 "type": "rebuild", 00:12:48.182 "target": "spare", 00:12:48.182 "progress": { 00:12:48.182 "blocks": 22528, 00:12:48.182 "percent": 35 00:12:48.182 } 00:12:48.182 }, 00:12:48.182 "base_bdevs_list": [ 00:12:48.182 { 00:12:48.182 "name": "spare", 00:12:48.182 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:48.182 "is_configured": true, 00:12:48.182 "data_offset": 2048, 00:12:48.182 "data_size": 63488 00:12:48.182 }, 00:12:48.182 { 00:12:48.182 "name": "BaseBdev2", 00:12:48.182 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:48.182 "is_configured": true, 00:12:48.182 "data_offset": 2048, 00:12:48.182 "data_size": 63488 00:12:48.182 } 00:12:48.182 ] 00:12:48.182 }' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:48.182 12:38:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:48.182 12:38:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:49.562 "name": "raid_bdev1", 00:12:49.562 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:49.562 "strip_size_kb": 0, 00:12:49.562 "state": "online", 00:12:49.562 "raid_level": "raid1", 00:12:49.562 "superblock": true, 00:12:49.562 "num_base_bdevs": 2, 00:12:49.562 "num_base_bdevs_discovered": 2, 00:12:49.562 "num_base_bdevs_operational": 2, 00:12:49.562 "process": { 
00:12:49.562 "type": "rebuild", 00:12:49.562 "target": "spare", 00:12:49.562 "progress": { 00:12:49.562 "blocks": 45056, 00:12:49.562 "percent": 70 00:12:49.562 } 00:12:49.562 }, 00:12:49.562 "base_bdevs_list": [ 00:12:49.562 { 00:12:49.562 "name": "spare", 00:12:49.562 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:49.562 "is_configured": true, 00:12:49.562 "data_offset": 2048, 00:12:49.562 "data_size": 63488 00:12:49.562 }, 00:12:49.562 { 00:12:49.562 "name": "BaseBdev2", 00:12:49.562 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:49.562 "is_configured": true, 00:12:49.562 "data_offset": 2048, 00:12:49.562 "data_size": 63488 00:12:49.562 } 00:12:49.562 ] 00:12:49.562 }' 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:49.562 12:38:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:49.562 12:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:49.562 12:38:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:50.130 [2024-12-14 12:38:49.715334] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:50.130 [2024-12-14 12:38:49.715421] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:50.130 [2024-12-14 12:38:49.715547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.390 
12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.390 "name": "raid_bdev1", 00:12:50.390 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:50.390 "strip_size_kb": 0, 00:12:50.390 "state": "online", 00:12:50.390 "raid_level": "raid1", 00:12:50.390 "superblock": true, 00:12:50.390 "num_base_bdevs": 2, 00:12:50.390 "num_base_bdevs_discovered": 2, 00:12:50.390 "num_base_bdevs_operational": 2, 00:12:50.390 "base_bdevs_list": [ 00:12:50.390 { 00:12:50.390 "name": "spare", 00:12:50.390 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:50.390 "is_configured": true, 00:12:50.390 "data_offset": 2048, 00:12:50.390 "data_size": 63488 00:12:50.390 }, 00:12:50.390 { 00:12:50.390 "name": "BaseBdev2", 00:12:50.390 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:50.390 "is_configured": true, 00:12:50.390 "data_offset": 2048, 00:12:50.390 "data_size": 63488 00:12:50.390 } 00:12:50.390 ] 00:12:50.390 }' 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:50.390 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.650 "name": "raid_bdev1", 00:12:50.650 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:50.650 "strip_size_kb": 0, 00:12:50.650 "state": "online", 00:12:50.650 "raid_level": "raid1", 00:12:50.650 "superblock": true, 00:12:50.650 "num_base_bdevs": 2, 00:12:50.650 "num_base_bdevs_discovered": 2, 00:12:50.650 "num_base_bdevs_operational": 2, 00:12:50.650 "base_bdevs_list": [ 00:12:50.650 { 00:12:50.650 
"name": "spare", 00:12:50.650 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:50.650 "is_configured": true, 00:12:50.650 "data_offset": 2048, 00:12:50.650 "data_size": 63488 00:12:50.650 }, 00:12:50.650 { 00:12:50.650 "name": "BaseBdev2", 00:12:50.650 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:50.650 "is_configured": true, 00:12:50.650 "data_offset": 2048, 00:12:50.650 "data_size": 63488 00:12:50.650 } 00:12:50.650 ] 00:12:50.650 }' 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.650 "name": "raid_bdev1", 00:12:50.650 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:50.650 "strip_size_kb": 0, 00:12:50.650 "state": "online", 00:12:50.650 "raid_level": "raid1", 00:12:50.650 "superblock": true, 00:12:50.650 "num_base_bdevs": 2, 00:12:50.650 "num_base_bdevs_discovered": 2, 00:12:50.650 "num_base_bdevs_operational": 2, 00:12:50.650 "base_bdevs_list": [ 00:12:50.650 { 00:12:50.650 "name": "spare", 00:12:50.650 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:50.650 "is_configured": true, 00:12:50.650 "data_offset": 2048, 00:12:50.650 "data_size": 63488 00:12:50.650 }, 00:12:50.650 { 00:12:50.650 "name": "BaseBdev2", 00:12:50.650 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:50.650 "is_configured": true, 00:12:50.650 "data_offset": 2048, 00:12:50.650 "data_size": 63488 00:12:50.650 } 00:12:50.650 ] 00:12:50.650 }' 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.650 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.218 [2024-12-14 12:38:50.777509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.218 [2024-12-14 12:38:50.777545] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:51.218 [2024-12-14 12:38:50.777632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:51.218 [2024-12-14 12:38:50.777710] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:51.218 [2024-12-14 12:38:50.777727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.218 12:38:50 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:51.478 /dev/nbd0 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.478 1+0 records in 00:12:51.478 1+0 records out 00:12:51.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426616 s, 9.6 MB/s 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.478 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:51.737 /dev/nbd1 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:51.737 12:38:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:51.737 1+0 records in 00:12:51.737 1+0 records out 00:12:51.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407199 s, 10.1 MB/s 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:51.737 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:51.738 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:51.738 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:51.738 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:51.738 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:51.997 
12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:51.997 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.257 [2024-12-14 12:38:51.972726] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:52.257 [2024-12-14 12:38:51.972782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.257 [2024-12-14 12:38:51.972805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:52.257 [2024-12-14 12:38:51.972814] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.257 [2024-12-14 12:38:51.975038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.257 [2024-12-14 12:38:51.975081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:52.257 [2024-12-14 12:38:51.975177] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:52.257 [2024-12-14 
12:38:51.975222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.257 [2024-12-14 12:38:51.975401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:52.257 spare 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.257 12:38:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.517 [2024-12-14 12:38:52.075325] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:52.517 [2024-12-14 12:38:52.075391] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.517 [2024-12-14 12:38:52.075754] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:52.517 [2024-12-14 12:38:52.075973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:52.517 [2024-12-14 12:38:52.075995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:52.517 [2024-12-14 12:38:52.076187] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.517 "name": "raid_bdev1", 00:12:52.517 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:52.517 "strip_size_kb": 0, 00:12:52.517 "state": "online", 00:12:52.517 "raid_level": "raid1", 00:12:52.517 "superblock": true, 00:12:52.517 "num_base_bdevs": 2, 00:12:52.517 "num_base_bdevs_discovered": 2, 00:12:52.517 "num_base_bdevs_operational": 2, 00:12:52.517 "base_bdevs_list": [ 00:12:52.517 { 00:12:52.517 "name": "spare", 00:12:52.517 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:52.517 "is_configured": true, 00:12:52.517 "data_offset": 2048, 00:12:52.517 "data_size": 63488 00:12:52.517 }, 00:12:52.517 { 00:12:52.517 "name": "BaseBdev2", 00:12:52.517 "uuid": 
"429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:52.517 "is_configured": true, 00:12:52.517 "data_offset": 2048, 00:12:52.517 "data_size": 63488 00:12:52.517 } 00:12:52.517 ] 00:12:52.517 }' 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.517 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.125 "name": "raid_bdev1", 00:12:53.125 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:53.125 "strip_size_kb": 0, 00:12:53.125 "state": "online", 00:12:53.125 "raid_level": "raid1", 00:12:53.125 "superblock": true, 00:12:53.125 "num_base_bdevs": 2, 00:12:53.125 "num_base_bdevs_discovered": 2, 00:12:53.125 "num_base_bdevs_operational": 2, 00:12:53.125 "base_bdevs_list": [ 00:12:53.125 { 
00:12:53.125 "name": "spare", 00:12:53.125 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:53.125 "is_configured": true, 00:12:53.125 "data_offset": 2048, 00:12:53.125 "data_size": 63488 00:12:53.125 }, 00:12:53.125 { 00:12:53.125 "name": "BaseBdev2", 00:12:53.125 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:53.125 "is_configured": true, 00:12:53.125 "data_offset": 2048, 00:12:53.125 "data_size": 63488 00:12:53.125 } 00:12:53.125 ] 00:12:53.125 }' 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.125 [2024-12-14 12:38:52.763446] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.125 "name": "raid_bdev1", 00:12:53.125 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:53.125 "strip_size_kb": 0, 00:12:53.125 
"state": "online", 00:12:53.125 "raid_level": "raid1", 00:12:53.125 "superblock": true, 00:12:53.125 "num_base_bdevs": 2, 00:12:53.125 "num_base_bdevs_discovered": 1, 00:12:53.125 "num_base_bdevs_operational": 1, 00:12:53.125 "base_bdevs_list": [ 00:12:53.125 { 00:12:53.125 "name": null, 00:12:53.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.125 "is_configured": false, 00:12:53.125 "data_offset": 0, 00:12:53.125 "data_size": 63488 00:12:53.125 }, 00:12:53.125 { 00:12:53.125 "name": "BaseBdev2", 00:12:53.125 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:53.125 "is_configured": true, 00:12:53.125 "data_offset": 2048, 00:12:53.125 "data_size": 63488 00:12:53.125 } 00:12:53.125 ] 00:12:53.125 }' 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.125 12:38:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.701 12:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:53.701 12:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.702 12:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.702 [2024-12-14 12:38:53.222702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.702 [2024-12-14 12:38:53.222925] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:53.702 [2024-12-14 12:38:53.222951] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:53.702 [2024-12-14 12:38:53.222988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:53.702 [2024-12-14 12:38:53.237769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:53.702 12:38:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.702 12:38:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:53.702 [2024-12-14 12:38:53.239624] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.639 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.639 "name": "raid_bdev1", 00:12:54.639 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:54.639 "strip_size_kb": 0, 00:12:54.639 "state": "online", 00:12:54.639 "raid_level": "raid1", 
00:12:54.639 "superblock": true, 00:12:54.639 "num_base_bdevs": 2, 00:12:54.639 "num_base_bdevs_discovered": 2, 00:12:54.639 "num_base_bdevs_operational": 2, 00:12:54.639 "process": { 00:12:54.639 "type": "rebuild", 00:12:54.639 "target": "spare", 00:12:54.639 "progress": { 00:12:54.639 "blocks": 20480, 00:12:54.639 "percent": 32 00:12:54.639 } 00:12:54.639 }, 00:12:54.639 "base_bdevs_list": [ 00:12:54.639 { 00:12:54.639 "name": "spare", 00:12:54.639 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:54.639 "is_configured": true, 00:12:54.639 "data_offset": 2048, 00:12:54.639 "data_size": 63488 00:12:54.639 }, 00:12:54.639 { 00:12:54.639 "name": "BaseBdev2", 00:12:54.639 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:54.639 "is_configured": true, 00:12:54.639 "data_offset": 2048, 00:12:54.639 "data_size": 63488 00:12:54.639 } 00:12:54.639 ] 00:12:54.639 }' 00:12:54.640 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.640 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.640 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.899 [2024-12-14 12:38:54.411638] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.899 [2024-12-14 12:38:54.445280] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:54.899 [2024-12-14 12:38:54.445369] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:12:54.899 [2024-12-14 12:38:54.445384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:54.899 [2024-12-14 12:38:54.445393] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.899 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.900 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.900 12:38:54 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.900 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.900 "name": "raid_bdev1", 00:12:54.900 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:54.900 "strip_size_kb": 0, 00:12:54.900 "state": "online", 00:12:54.900 "raid_level": "raid1", 00:12:54.900 "superblock": true, 00:12:54.900 "num_base_bdevs": 2, 00:12:54.900 "num_base_bdevs_discovered": 1, 00:12:54.900 "num_base_bdevs_operational": 1, 00:12:54.900 "base_bdevs_list": [ 00:12:54.900 { 00:12:54.900 "name": null, 00:12:54.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.900 "is_configured": false, 00:12:54.900 "data_offset": 0, 00:12:54.900 "data_size": 63488 00:12:54.900 }, 00:12:54.900 { 00:12:54.900 "name": "BaseBdev2", 00:12:54.900 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:54.900 "is_configured": true, 00:12:54.900 "data_offset": 2048, 00:12:54.900 "data_size": 63488 00:12:54.900 } 00:12:54.900 ] 00:12:54.900 }' 00:12:54.900 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.900 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.469 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:55.469 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.469 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.469 [2024-12-14 12:38:54.937874] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:55.469 [2024-12-14 12:38:54.937945] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.469 [2024-12-14 12:38:54.937967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:55.469 [2024-12-14 12:38:54.937978] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.469 [2024-12-14 12:38:54.938526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.469 [2024-12-14 12:38:54.938559] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:55.469 [2024-12-14 12:38:54.938668] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:55.469 [2024-12-14 12:38:54.938692] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:55.469 [2024-12-14 12:38:54.938704] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:55.469 [2024-12-14 12:38:54.938734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:55.469 [2024-12-14 12:38:54.954446] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:55.469 spare 00:12:55.469 12:38:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.469 12:38:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:55.469 [2024-12-14 12:38:54.956301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:56.405 12:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.405 12:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.405 12:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.405 12:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.406 12:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.406 12:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:56.406 12:38:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.406 12:38:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.406 12:38:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.406 12:38:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.406 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.406 "name": "raid_bdev1", 00:12:56.406 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:56.406 "strip_size_kb": 0, 00:12:56.406 "state": "online", 00:12:56.406 "raid_level": "raid1", 00:12:56.406 "superblock": true, 00:12:56.406 "num_base_bdevs": 2, 00:12:56.406 "num_base_bdevs_discovered": 2, 00:12:56.406 "num_base_bdevs_operational": 2, 00:12:56.406 "process": { 00:12:56.406 "type": "rebuild", 00:12:56.406 "target": "spare", 00:12:56.406 "progress": { 00:12:56.406 "blocks": 20480, 00:12:56.406 "percent": 32 00:12:56.406 } 00:12:56.406 }, 00:12:56.406 "base_bdevs_list": [ 00:12:56.406 { 00:12:56.406 "name": "spare", 00:12:56.406 "uuid": "ba0f5d2c-8a11-5fe1-8bb8-c98eb6cb487e", 00:12:56.406 "is_configured": true, 00:12:56.406 "data_offset": 2048, 00:12:56.406 "data_size": 63488 00:12:56.406 }, 00:12:56.406 { 00:12:56.406 "name": "BaseBdev2", 00:12:56.406 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:56.406 "is_configured": true, 00:12:56.406 "data_offset": 2048, 00:12:56.406 "data_size": 63488 00:12:56.406 } 00:12:56.406 ] 00:12:56.406 }' 00:12:56.406 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.406 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:56.406 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.406 
12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:56.406 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:56.406 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.406 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.406 [2024-12-14 12:38:56.096273] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.664 [2024-12-14 12:38:56.161860] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:56.664 [2024-12-14 12:38:56.161932] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.664 [2024-12-14 12:38:56.161949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:56.664 [2024-12-14 12:38:56.161957] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.664 "name": "raid_bdev1", 00:12:56.664 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:56.664 "strip_size_kb": 0, 00:12:56.664 "state": "online", 00:12:56.664 "raid_level": "raid1", 00:12:56.664 "superblock": true, 00:12:56.664 "num_base_bdevs": 2, 00:12:56.664 "num_base_bdevs_discovered": 1, 00:12:56.664 "num_base_bdevs_operational": 1, 00:12:56.664 "base_bdevs_list": [ 00:12:56.664 { 00:12:56.664 "name": null, 00:12:56.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.664 "is_configured": false, 00:12:56.664 "data_offset": 0, 00:12:56.664 "data_size": 63488 00:12:56.664 }, 00:12:56.664 { 00:12:56.664 "name": "BaseBdev2", 00:12:56.664 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:56.664 "is_configured": true, 00:12:56.664 "data_offset": 2048, 00:12:56.664 "data_size": 63488 00:12:56.664 } 00:12:56.664 ] 00:12:56.664 }' 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.664 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 12:38:56 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:57.232 "name": "raid_bdev1", 00:12:57.232 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:57.232 "strip_size_kb": 0, 00:12:57.232 "state": "online", 00:12:57.232 "raid_level": "raid1", 00:12:57.232 "superblock": true, 00:12:57.232 "num_base_bdevs": 2, 00:12:57.232 "num_base_bdevs_discovered": 1, 00:12:57.232 "num_base_bdevs_operational": 1, 00:12:57.232 "base_bdevs_list": [ 00:12:57.232 { 00:12:57.232 "name": null, 00:12:57.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.232 "is_configured": false, 00:12:57.232 "data_offset": 0, 00:12:57.232 "data_size": 63488 00:12:57.232 }, 00:12:57.232 { 00:12:57.232 "name": "BaseBdev2", 00:12:57.232 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:57.232 "is_configured": true, 00:12:57.232 "data_offset": 2048, 00:12:57.232 "data_size": 
63488 00:12:57.232 } 00:12:57.232 ] 00:12:57.232 }' 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.232 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.232 [2024-12-14 12:38:56.843153] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:57.232 [2024-12-14 12:38:56.843213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:57.232 [2024-12-14 12:38:56.843237] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:57.232 [2024-12-14 12:38:56.843255] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:57.232 [2024-12-14 12:38:56.843719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:57.232 [2024-12-14 12:38:56.843735] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:12:57.233 [2024-12-14 12:38:56.843819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:57.233 [2024-12-14 12:38:56.843833] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:57.233 [2024-12-14 12:38:56.843844] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:57.233 [2024-12-14 12:38:56.843854] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:57.233 BaseBdev1 00:12:57.233 12:38:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.233 12:38:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.170 "name": "raid_bdev1", 00:12:58.170 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:58.170 "strip_size_kb": 0, 00:12:58.170 "state": "online", 00:12:58.170 "raid_level": "raid1", 00:12:58.170 "superblock": true, 00:12:58.170 "num_base_bdevs": 2, 00:12:58.170 "num_base_bdevs_discovered": 1, 00:12:58.170 "num_base_bdevs_operational": 1, 00:12:58.170 "base_bdevs_list": [ 00:12:58.170 { 00:12:58.170 "name": null, 00:12:58.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.170 "is_configured": false, 00:12:58.170 "data_offset": 0, 00:12:58.170 "data_size": 63488 00:12:58.170 }, 00:12:58.170 { 00:12:58.170 "name": "BaseBdev2", 00:12:58.170 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:58.170 "is_configured": true, 00:12:58.170 "data_offset": 2048, 00:12:58.170 "data_size": 63488 00:12:58.170 } 00:12:58.170 ] 00:12:58.170 }' 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.170 12:38:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.739 "name": "raid_bdev1", 00:12:58.739 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:58.739 "strip_size_kb": 0, 00:12:58.739 "state": "online", 00:12:58.739 "raid_level": "raid1", 00:12:58.739 "superblock": true, 00:12:58.739 "num_base_bdevs": 2, 00:12:58.739 "num_base_bdevs_discovered": 1, 00:12:58.739 "num_base_bdevs_operational": 1, 00:12:58.739 "base_bdevs_list": [ 00:12:58.739 { 00:12:58.739 "name": null, 00:12:58.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.739 "is_configured": false, 00:12:58.739 "data_offset": 0, 00:12:58.739 "data_size": 63488 00:12:58.739 }, 00:12:58.739 { 00:12:58.739 "name": "BaseBdev2", 00:12:58.739 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:58.739 "is_configured": true, 00:12:58.739 "data_offset": 2048, 00:12:58.739 "data_size": 63488 00:12:58.739 } 00:12:58.739 ] 00:12:58.739 }' 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.739 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.739 12:38:58 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.998 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.998 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.999 [2024-12-14 12:38:58.496405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.999 [2024-12-14 12:38:58.496646] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:58.999 [2024-12-14 12:38:58.496713] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:58.999 request: 00:12:58.999 { 00:12:58.999 "base_bdev": "BaseBdev1", 00:12:58.999 "raid_bdev": "raid_bdev1", 00:12:58.999 "method": 
"bdev_raid_add_base_bdev", 00:12:58.999 "req_id": 1 00:12:58.999 } 00:12:58.999 Got JSON-RPC error response 00:12:58.999 response: 00:12:58.999 { 00:12:58.999 "code": -22, 00:12:58.999 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:58.999 } 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:58.999 12:38:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.936 12:38:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.936 "name": "raid_bdev1", 00:12:59.936 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:12:59.936 "strip_size_kb": 0, 00:12:59.936 "state": "online", 00:12:59.936 "raid_level": "raid1", 00:12:59.936 "superblock": true, 00:12:59.936 "num_base_bdevs": 2, 00:12:59.936 "num_base_bdevs_discovered": 1, 00:12:59.936 "num_base_bdevs_operational": 1, 00:12:59.936 "base_bdevs_list": [ 00:12:59.936 { 00:12:59.936 "name": null, 00:12:59.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.936 "is_configured": false, 00:12:59.936 "data_offset": 0, 00:12:59.936 "data_size": 63488 00:12:59.936 }, 00:12:59.936 { 00:12:59.936 "name": "BaseBdev2", 00:12:59.936 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:12:59.936 "is_configured": true, 00:12:59.936 "data_offset": 2048, 00:12:59.936 "data_size": 63488 00:12:59.936 } 00:12:59.936 ] 00:12:59.936 }' 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.936 12:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.503 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.504 "name": "raid_bdev1", 00:13:00.504 "uuid": "36a504ea-1cc5-42f5-b2c8-84f27846b43c", 00:13:00.504 "strip_size_kb": 0, 00:13:00.504 "state": "online", 00:13:00.504 "raid_level": "raid1", 00:13:00.504 "superblock": true, 00:13:00.504 "num_base_bdevs": 2, 00:13:00.504 "num_base_bdevs_discovered": 1, 00:13:00.504 "num_base_bdevs_operational": 1, 00:13:00.504 "base_bdevs_list": [ 00:13:00.504 { 00:13:00.504 "name": null, 00:13:00.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.504 "is_configured": false, 00:13:00.504 "data_offset": 0, 00:13:00.504 "data_size": 63488 00:13:00.504 }, 00:13:00.504 { 00:13:00.504 "name": "BaseBdev2", 00:13:00.504 "uuid": "429ff2ee-229e-539d-9398-d59f12c758f1", 00:13:00.504 "is_configured": true, 00:13:00.504 "data_offset": 2048, 00:13:00.504 "data_size": 63488 00:13:00.504 } 00:13:00.504 ] 00:13:00.504 }' 00:13:00.504 12:38:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77504 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77504 ']' 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77504 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77504 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.504 killing process with pid 77504 00:13:00.504 Received shutdown signal, test time was about 60.000000 seconds 00:13:00.504 00:13:00.504 Latency(us) 00:13:00.504 [2024-12-14T12:39:00.242Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:00.504 [2024-12-14T12:39:00.242Z] =================================================================================================================== 00:13:00.504 [2024-12-14T12:39:00.242Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77504' 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77504 00:13:00.504 [2024-12-14 12:39:00.099884] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:00.504 [2024-12-14 
12:39:00.100020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:00.504 12:39:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77504 00:13:00.504 [2024-12-14 12:39:00.100091] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:00.504 [2024-12-14 12:39:00.100106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:00.762 [2024-12-14 12:39:00.390048] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.139 12:39:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:02.139 00:13:02.139 real 0m23.269s 00:13:02.139 user 0m28.754s 00:13:02.139 sys 0m3.579s 00:13:02.139 12:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.139 12:39:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.139 ************************************ 00:13:02.139 END TEST raid_rebuild_test_sb 00:13:02.139 ************************************ 00:13:02.139 12:39:01 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:02.139 12:39:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:02.139 12:39:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.139 12:39:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:02.139 ************************************ 00:13:02.139 START TEST raid_rebuild_test_io 00:13:02.139 ************************************ 00:13:02.139 12:39:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:02.139 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:02.139 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:02.139 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:02.140 
12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78228 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78228 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78228 ']' 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.140 12:39:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.140 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:02.140 Zero copy mechanism will not be used. 00:13:02.140 [2024-12-14 12:39:01.669160] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:13:02.140 [2024-12-14 12:39:01.669279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78228 ] 00:13:02.140 [2024-12-14 12:39:01.841294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:02.398 [2024-12-14 12:39:01.952173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.656 [2024-12-14 12:39:02.145941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.656 [2024-12-14 12:39:02.145994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.915 BaseBdev1_malloc 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.915 [2024-12-14 12:39:02.550175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:02.915 [2024-12-14 12:39:02.550290] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.915 [2024-12-14 12:39:02.550336] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:02.915 [2024-12-14 12:39:02.550366] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.915 [2024-12-14 12:39:02.552393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.915 [2024-12-14 12:39:02.552469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:02.915 BaseBdev1 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.915 BaseBdev2_malloc 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:02.915 [2024-12-14 12:39:02.603526] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:02.915 [2024-12-14 12:39:02.603610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.915 [2024-12-14 12:39:02.603647] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:02.915 [2024-12-14 12:39:02.603660] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.915 [2024-12-14 12:39:02.605893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.915 [2024-12-14 12:39:02.605938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:02.915 BaseBdev2 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.915 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.210 spare_malloc 00:13:03.210 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.210 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:03.210 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.210 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.210 spare_delay 00:13:03.210 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.210 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.211 [2024-12-14 12:39:02.681568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:03.211 [2024-12-14 12:39:02.681627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:03.211 [2024-12-14 12:39:02.681648] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:03.211 [2024-12-14 12:39:02.681658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:03.211 [2024-12-14 12:39:02.683917] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:03.211 [2024-12-14 12:39:02.684001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:03.211 spare 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.211 [2024-12-14 12:39:02.693603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:03.211 [2024-12-14 12:39:02.695515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.211 [2024-12-14 12:39:02.695615] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:03.211 [2024-12-14 12:39:02.695630] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:03.211 [2024-12-14 12:39:02.695905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:03.211 [2024-12-14 12:39:02.696073] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:03.211 [2024-12-14 12:39:02.696085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:03.211 [2024-12-14 12:39:02.696237] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.211 
"name": "raid_bdev1", 00:13:03.211 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:03.211 "strip_size_kb": 0, 00:13:03.211 "state": "online", 00:13:03.211 "raid_level": "raid1", 00:13:03.211 "superblock": false, 00:13:03.211 "num_base_bdevs": 2, 00:13:03.211 "num_base_bdevs_discovered": 2, 00:13:03.211 "num_base_bdevs_operational": 2, 00:13:03.211 "base_bdevs_list": [ 00:13:03.211 { 00:13:03.211 "name": "BaseBdev1", 00:13:03.211 "uuid": "d5165a1f-8cec-5107-8ec9-429614593d54", 00:13:03.211 "is_configured": true, 00:13:03.211 "data_offset": 0, 00:13:03.211 "data_size": 65536 00:13:03.211 }, 00:13:03.211 { 00:13:03.211 "name": "BaseBdev2", 00:13:03.211 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:03.211 "is_configured": true, 00:13:03.211 "data_offset": 0, 00:13:03.211 "data_size": 65536 00:13:03.211 } 00:13:03.211 ] 00:13:03.211 }' 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.211 12:39:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.473 [2024-12-14 12:39:03.133161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:03.473 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.732 [2024-12-14 12:39:03.232663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.732 12:39:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.732 "name": "raid_bdev1", 00:13:03.732 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:03.732 "strip_size_kb": 0, 00:13:03.732 "state": "online", 00:13:03.732 "raid_level": "raid1", 00:13:03.732 "superblock": false, 00:13:03.732 "num_base_bdevs": 2, 00:13:03.732 "num_base_bdevs_discovered": 1, 00:13:03.732 "num_base_bdevs_operational": 1, 00:13:03.732 "base_bdevs_list": [ 00:13:03.732 { 00:13:03.732 "name": null, 00:13:03.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.732 "is_configured": false, 00:13:03.732 "data_offset": 0, 00:13:03.732 "data_size": 65536 00:13:03.732 }, 00:13:03.732 { 00:13:03.732 "name": "BaseBdev2", 00:13:03.732 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:03.732 "is_configured": true, 00:13:03.732 "data_offset": 0, 00:13:03.732 "data_size": 65536 00:13:03.732 } 00:13:03.732 ] 00:13:03.732 }' 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:03.732 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.732 [2024-12-14 12:39:03.328595] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:03.732 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:03.732 Zero copy mechanism will not be used. 00:13:03.732 Running I/O for 60 seconds... 00:13:03.991 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:03.991 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.991 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:03.991 [2024-12-14 12:39:03.697129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:04.250 12:39:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.250 12:39:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:04.250 [2024-12-14 12:39:03.750061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:04.250 [2024-12-14 12:39:03.752122] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:04.250 [2024-12-14 12:39:03.866111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.250 [2024-12-14 12:39:03.866769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:04.509 [2024-12-14 12:39:04.075613] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:04.509 [2024-12-14 12:39:04.076051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:04.768 [2024-12-14 12:39:04.318271] 
bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:04.768 222.00 IOPS, 666.00 MiB/s [2024-12-14T12:39:04.506Z] [2024-12-14 12:39:04.432933] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:04.768 [2024-12-14 12:39:04.433303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.026 12:39:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.026 [2024-12-14 12:39:04.753538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.285 "name": "raid_bdev1", 00:13:05.285 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:05.285 
"strip_size_kb": 0, 00:13:05.285 "state": "online", 00:13:05.285 "raid_level": "raid1", 00:13:05.285 "superblock": false, 00:13:05.285 "num_base_bdevs": 2, 00:13:05.285 "num_base_bdevs_discovered": 2, 00:13:05.285 "num_base_bdevs_operational": 2, 00:13:05.285 "process": { 00:13:05.285 "type": "rebuild", 00:13:05.285 "target": "spare", 00:13:05.285 "progress": { 00:13:05.285 "blocks": 12288, 00:13:05.285 "percent": 18 00:13:05.285 } 00:13:05.285 }, 00:13:05.285 "base_bdevs_list": [ 00:13:05.285 { 00:13:05.285 "name": "spare", 00:13:05.285 "uuid": "3333bfb8-e1d5-5707-a978-74b429d68d06", 00:13:05.285 "is_configured": true, 00:13:05.285 "data_offset": 0, 00:13:05.285 "data_size": 65536 00:13:05.285 }, 00:13:05.285 { 00:13:05.285 "name": "BaseBdev2", 00:13:05.285 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:05.285 "is_configured": true, 00:13:05.285 "data_offset": 0, 00:13:05.285 "data_size": 65536 00:13:05.285 } 00:13:05.285 ] 00:13:05.285 }' 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.285 12:39:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.285 [2024-12-14 12:39:04.900643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.285 [2024-12-14 12:39:04.980470] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such 
device 00:13:05.285 [2024-12-14 12:39:04.989519] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.285 [2024-12-14 12:39:04.989561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:05.285 [2024-12-14 12:39:04.989577] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:05.543 [2024-12-14 12:39:05.032356] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.543 12:39:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.543 "name": "raid_bdev1", 00:13:05.543 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:05.543 "strip_size_kb": 0, 00:13:05.543 "state": "online", 00:13:05.543 "raid_level": "raid1", 00:13:05.543 "superblock": false, 00:13:05.543 "num_base_bdevs": 2, 00:13:05.543 "num_base_bdevs_discovered": 1, 00:13:05.543 "num_base_bdevs_operational": 1, 00:13:05.543 "base_bdevs_list": [ 00:13:05.543 { 00:13:05.543 "name": null, 00:13:05.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.543 "is_configured": false, 00:13:05.543 "data_offset": 0, 00:13:05.543 "data_size": 65536 00:13:05.543 }, 00:13:05.543 { 00:13:05.543 "name": "BaseBdev2", 00:13:05.543 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:05.543 "is_configured": true, 00:13:05.543 "data_offset": 0, 00:13:05.543 "data_size": 65536 00:13:05.543 } 00:13:05.543 ] 00:13:05.543 }' 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.543 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:05.800 189.00 IOPS, 567.00 MiB/s [2024-12-14T12:39:05.538Z] 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.800 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.800 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.800 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.800 12:39:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.800 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.800 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.800 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.800 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.058 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.058 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.058 "name": "raid_bdev1", 00:13:06.058 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:06.058 "strip_size_kb": 0, 00:13:06.058 "state": "online", 00:13:06.058 "raid_level": "raid1", 00:13:06.058 "superblock": false, 00:13:06.058 "num_base_bdevs": 2, 00:13:06.058 "num_base_bdevs_discovered": 1, 00:13:06.058 "num_base_bdevs_operational": 1, 00:13:06.058 "base_bdevs_list": [ 00:13:06.058 { 00:13:06.058 "name": null, 00:13:06.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.058 "is_configured": false, 00:13:06.058 "data_offset": 0, 00:13:06.058 "data_size": 65536 00:13:06.058 }, 00:13:06.058 { 00:13:06.058 "name": "BaseBdev2", 00:13:06.058 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:06.058 "is_configured": true, 00:13:06.058 "data_offset": 0, 00:13:06.058 "data_size": 65536 00:13:06.058 } 00:13:06.058 ] 00:13:06.058 }' 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:06.059 [2024-12-14 12:39:05.651522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.059 12:39:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:06.059 [2024-12-14 12:39:05.711826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:06.059 [2024-12-14 12:39:05.714012] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:06.317 [2024-12-14 12:39:05.837603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:06.317 [2024-12-14 12:39:05.838285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:06.575 [2024-12-14 12:39:06.063179] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:06.575 [2024-12-14 12:39:06.063620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:06.834 170.33 IOPS, 511.00 MiB/s [2024-12-14T12:39:06.572Z] [2024-12-14 12:39:06.435383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:06.834 [2024-12-14 12:39:06.436091] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:07.093 [2024-12-14 
12:39:06.639264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:07.093 [2024-12-14 12:39:06.639720] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.093 "name": "raid_bdev1", 00:13:07.093 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:07.093 "strip_size_kb": 0, 00:13:07.093 "state": "online", 00:13:07.093 "raid_level": "raid1", 00:13:07.093 "superblock": false, 00:13:07.093 "num_base_bdevs": 2, 00:13:07.093 "num_base_bdevs_discovered": 2, 00:13:07.093 "num_base_bdevs_operational": 2, 00:13:07.093 "process": { 00:13:07.093 "type": "rebuild", 00:13:07.093 "target": "spare", 00:13:07.093 "progress": { 
00:13:07.093 "blocks": 10240, 00:13:07.093 "percent": 15 00:13:07.093 } 00:13:07.093 }, 00:13:07.093 "base_bdevs_list": [ 00:13:07.093 { 00:13:07.093 "name": "spare", 00:13:07.093 "uuid": "3333bfb8-e1d5-5707-a978-74b429d68d06", 00:13:07.093 "is_configured": true, 00:13:07.093 "data_offset": 0, 00:13:07.093 "data_size": 65536 00:13:07.093 }, 00:13:07.093 { 00:13:07.093 "name": "BaseBdev2", 00:13:07.093 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:07.093 "is_configured": true, 00:13:07.093 "data_offset": 0, 00:13:07.093 "data_size": 65536 00:13:07.093 } 00:13:07.093 ] 00:13:07.093 }' 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.093 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=401 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:07.352 "name": "raid_bdev1", 00:13:07.352 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:07.352 "strip_size_kb": 0, 00:13:07.352 "state": "online", 00:13:07.352 "raid_level": "raid1", 00:13:07.352 "superblock": false, 00:13:07.352 "num_base_bdevs": 2, 00:13:07.352 "num_base_bdevs_discovered": 2, 00:13:07.352 "num_base_bdevs_operational": 2, 00:13:07.352 "process": { 00:13:07.352 "type": "rebuild", 00:13:07.352 "target": "spare", 00:13:07.352 "progress": { 00:13:07.352 "blocks": 10240, 00:13:07.352 "percent": 15 00:13:07.352 } 00:13:07.352 }, 00:13:07.352 "base_bdevs_list": [ 00:13:07.352 { 00:13:07.352 "name": "spare", 00:13:07.352 "uuid": "3333bfb8-e1d5-5707-a978-74b429d68d06", 00:13:07.352 "is_configured": true, 00:13:07.352 "data_offset": 0, 00:13:07.352 "data_size": 65536 00:13:07.352 }, 00:13:07.352 { 00:13:07.352 "name": "BaseBdev2", 00:13:07.352 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:07.352 "is_configured": true, 00:13:07.352 "data_offset": 0, 00:13:07.352 "data_size": 65536 00:13:07.352 } 00:13:07.352 ] 00:13:07.352 }' 00:13:07.352 
12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:07.352 12:39:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:07.352 [2024-12-14 12:39:06.965841] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:07.352 12:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:07.352 12:39:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:07.352 [2024-12-14 12:39:07.083347] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:07.352 [2024-12-14 12:39:07.083620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:07.869 143.75 IOPS, 431.25 MiB/s [2024-12-14T12:39:07.607Z] [2024-12-14 12:39:07.420063] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:08.127 [2024-12-14 12:39:07.731117] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:08.385 [2024-12-14 12:39:07.942222] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:08.385 "name": "raid_bdev1", 00:13:08.385 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:08.385 "strip_size_kb": 0, 00:13:08.385 "state": "online", 00:13:08.385 "raid_level": "raid1", 00:13:08.385 "superblock": false, 00:13:08.385 "num_base_bdevs": 2, 00:13:08.385 "num_base_bdevs_discovered": 2, 00:13:08.385 "num_base_bdevs_operational": 2, 00:13:08.385 "process": { 00:13:08.385 "type": "rebuild", 00:13:08.385 "target": "spare", 00:13:08.385 "progress": { 00:13:08.385 "blocks": 28672, 00:13:08.385 "percent": 43 00:13:08.385 } 00:13:08.385 }, 00:13:08.385 "base_bdevs_list": [ 00:13:08.385 { 00:13:08.385 "name": "spare", 00:13:08.385 "uuid": "3333bfb8-e1d5-5707-a978-74b429d68d06", 00:13:08.385 "is_configured": true, 00:13:08.385 "data_offset": 0, 00:13:08.385 "data_size": 65536 00:13:08.385 }, 00:13:08.385 { 00:13:08.385 "name": "BaseBdev2", 00:13:08.385 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:08.385 "is_configured": true, 00:13:08.385 "data_offset": 0, 00:13:08.385 "data_size": 65536 00:13:08.385 } 00:13:08.385 ] 00:13:08.385 }' 00:13:08.385 
12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:08.385 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:08.644 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:08.644 12:39:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:08.644 [2024-12-14 12:39:08.287030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:08.902 123.60 IOPS, 370.80 MiB/s [2024-12-14T12:39:08.640Z] [2024-12-14 12:39:08.404152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:09.160 [2024-12-14 12:39:08.753468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:09.727 [2024-12-14 12:39:09.172741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:09.727 "name": "raid_bdev1", 00:13:09.727 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:09.727 "strip_size_kb": 0, 00:13:09.727 "state": "online", 00:13:09.727 "raid_level": "raid1", 00:13:09.727 "superblock": false, 00:13:09.727 "num_base_bdevs": 2, 00:13:09.727 "num_base_bdevs_discovered": 2, 00:13:09.727 "num_base_bdevs_operational": 2, 00:13:09.727 "process": { 00:13:09.727 "type": "rebuild", 00:13:09.727 "target": "spare", 00:13:09.727 "progress": { 00:13:09.727 "blocks": 47104, 00:13:09.727 "percent": 71 00:13:09.727 } 00:13:09.727 }, 00:13:09.727 "base_bdevs_list": [ 00:13:09.727 { 00:13:09.727 "name": "spare", 00:13:09.727 "uuid": "3333bfb8-e1d5-5707-a978-74b429d68d06", 00:13:09.727 "is_configured": true, 00:13:09.727 "data_offset": 0, 00:13:09.727 "data_size": 65536 00:13:09.727 }, 00:13:09.727 { 00:13:09.727 "name": "BaseBdev2", 00:13:09.727 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:09.727 "is_configured": true, 00:13:09.727 "data_offset": 0, 00:13:09.727 "data_size": 65536 00:13:09.727 } 00:13:09.727 ] 00:13:09.727 }' 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:09.727 
12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:09.727 12:39:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:10.663 111.83 IOPS, 335.50 MiB/s [2024-12-14T12:39:10.401Z] [2024-12-14 12:39:10.117197] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:10.663 [2024-12-14 12:39:10.222550] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:10.663 [2024-12-14 12:39:10.225875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.663 99.43 IOPS, 298.29 MiB/s [2024-12-14T12:39:10.401Z] 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:10.663 "name": "raid_bdev1", 00:13:10.663 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:10.663 "strip_size_kb": 0, 00:13:10.663 "state": "online", 00:13:10.663 "raid_level": "raid1", 00:13:10.663 "superblock": false, 00:13:10.663 "num_base_bdevs": 2, 00:13:10.663 "num_base_bdevs_discovered": 2, 00:13:10.663 "num_base_bdevs_operational": 2, 00:13:10.663 "base_bdevs_list": [ 00:13:10.663 { 00:13:10.663 "name": "spare", 00:13:10.663 "uuid": "3333bfb8-e1d5-5707-a978-74b429d68d06", 00:13:10.663 "is_configured": true, 00:13:10.663 "data_offset": 0, 00:13:10.663 "data_size": 65536 00:13:10.663 }, 00:13:10.663 { 00:13:10.663 "name": "BaseBdev2", 00:13:10.663 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:10.663 "is_configured": true, 00:13:10.663 "data_offset": 0, 00:13:10.663 "data_size": 65536 00:13:10.663 } 00:13:10.663 ] 00:13:10.663 }' 00:13:10.663 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.925 12:39:10 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.925 "name": "raid_bdev1", 00:13:10.925 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:10.925 "strip_size_kb": 0, 00:13:10.925 "state": "online", 00:13:10.925 "raid_level": "raid1", 00:13:10.925 "superblock": false, 00:13:10.925 "num_base_bdevs": 2, 00:13:10.925 "num_base_bdevs_discovered": 2, 00:13:10.925 "num_base_bdevs_operational": 2, 00:13:10.925 "base_bdevs_list": [ 00:13:10.925 { 00:13:10.925 "name": "spare", 00:13:10.925 "uuid": "3333bfb8-e1d5-5707-a978-74b429d68d06", 00:13:10.925 "is_configured": true, 00:13:10.925 "data_offset": 0, 00:13:10.925 "data_size": 65536 00:13:10.925 }, 00:13:10.925 { 00:13:10.925 "name": "BaseBdev2", 00:13:10.925 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:10.925 "is_configured": true, 00:13:10.925 "data_offset": 0, 00:13:10.925 "data_size": 65536 00:13:10.925 } 00:13:10.925 ] 00:13:10.925 }' 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.925 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.926 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.189 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.189 "name": "raid_bdev1", 00:13:11.189 "uuid": "84a5f261-3eb4-47ef-8f44-bbe7ff10a6ab", 00:13:11.189 "strip_size_kb": 0, 00:13:11.189 "state": "online", 00:13:11.189 "raid_level": "raid1", 00:13:11.189 "superblock": false, 00:13:11.189 "num_base_bdevs": 2, 00:13:11.189 
"num_base_bdevs_discovered": 2, 00:13:11.189 "num_base_bdevs_operational": 2, 00:13:11.189 "base_bdevs_list": [ 00:13:11.189 { 00:13:11.189 "name": "spare", 00:13:11.189 "uuid": "3333bfb8-e1d5-5707-a978-74b429d68d06", 00:13:11.189 "is_configured": true, 00:13:11.189 "data_offset": 0, 00:13:11.189 "data_size": 65536 00:13:11.189 }, 00:13:11.189 { 00:13:11.189 "name": "BaseBdev2", 00:13:11.189 "uuid": "777cb5ca-1ba5-5d83-96a2-0d3f37aba161", 00:13:11.189 "is_configured": true, 00:13:11.189 "data_offset": 0, 00:13:11.189 "data_size": 65536 00:13:11.189 } 00:13:11.189 ] 00:13:11.189 }' 00:13:11.189 12:39:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.189 12:39:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.448 [2024-12-14 12:39:11.025896] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:11.448 [2024-12-14 12:39:11.025992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.448 00:13:11.448 Latency(us) 00:13:11.448 [2024-12-14T12:39:11.186Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:11.448 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:11.448 raid_bdev1 : 7.78 92.96 278.87 0.00 0.00 14079.57 313.01 109894.43 00:13:11.448 [2024-12-14T12:39:11.186Z] =================================================================================================================== 00:13:11.448 [2024-12-14T12:39:11.186Z] Total : 92.96 278.87 0.00 0.00 14079.57 313.01 109894.43 00:13:11.448 [2024-12-14 12:39:11.121465] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.448 [2024-12-14 12:39:11.121612] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.448 [2024-12-14 12:39:11.121753] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.448 [2024-12-14 12:39:11.121821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:11.448 { 00:13:11.448 "results": [ 00:13:11.448 { 00:13:11.448 "job": "raid_bdev1", 00:13:11.448 "core_mask": "0x1", 00:13:11.448 "workload": "randrw", 00:13:11.448 "percentage": 50, 00:13:11.448 "status": "finished", 00:13:11.448 "queue_depth": 2, 00:13:11.448 "io_size": 3145728, 00:13:11.448 "runtime": 7.777902, 00:13:11.448 "iops": 92.95565822248724, 00:13:11.448 "mibps": 278.8669746674617, 00:13:11.448 "io_failed": 0, 00:13:11.448 "io_timeout": 0, 00:13:11.448 "avg_latency_us": 14079.571702090392, 00:13:11.448 "min_latency_us": 313.0131004366812, 00:13:11.448 "max_latency_us": 109894.42794759825 00:13:11.448 } 00:13:11.448 ], 00:13:11.448 "core_count": 1 00:13:11.448 } 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 
00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.448 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.707 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:11.707 /dev/nbd0 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 
00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:11.965 1+0 records in 00:13:11.965 1+0 records out 00:13:11.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360861 s, 11.4 MB/s 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 
00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:11.965 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:12.223 /dev/nbd1 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:13:12.223 1+0 records in 00:13:12.223 1+0 records out 00:13:12.223 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557312 s, 7.3 MB/s 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:12.223 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.224 12:39:11 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:12.481 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78228 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78228 ']' 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78228 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.739 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78228 00:13:12.997 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.997 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.997 killing process with pid 78228 00:13:12.997 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78228' 00:13:12.997 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78228 00:13:12.997 Received shutdown signal, test time was about 9.173822 seconds 00:13:12.997 00:13:12.997 Latency(us) 00:13:12.997 [2024-12-14T12:39:12.735Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.997 [2024-12-14T12:39:12.735Z] =================================================================================================================== 00:13:12.997 
[2024-12-14T12:39:12.735Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:12.997 [2024-12-14 12:39:12.486943] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.997 12:39:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78228 00:13:13.254 [2024-12-14 12:39:12.758489] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:14.629 00:13:14.629 real 0m12.569s 00:13:14.629 user 0m15.935s 00:13:14.629 sys 0m1.460s 00:13:14.629 ************************************ 00:13:14.629 END TEST raid_rebuild_test_io 00:13:14.629 ************************************ 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.629 12:39:14 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:14.629 12:39:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:14.629 12:39:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:14.629 12:39:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.629 ************************************ 00:13:14.629 START TEST raid_rebuild_test_sb_io 00:13:14.629 ************************************ 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # 
local background_io=true 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78611 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78611 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78611 ']' 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.629 12:39:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:14.629 [2024-12-14 12:39:14.312148] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:14.629 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:14.629 Zero copy mechanism will not be used. 
00:13:14.629 [2024-12-14 12:39:14.312358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78611 ] 00:13:14.887 [2024-12-14 12:39:14.490953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.145 [2024-12-14 12:39:14.623692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:15.145 [2024-12-14 12:39:14.866855] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.145 [2024-12-14 12:39:14.867007] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 BaseBdev1_malloc 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 [2024-12-14 12:39:15.251983] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:15.711 [2024-12-14 12:39:15.252081] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.711 [2024-12-14 12:39:15.252106] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:15.711 [2024-12-14 12:39:15.252120] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.711 [2024-12-14 12:39:15.254502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.711 [2024-12-14 12:39:15.254548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.711 BaseBdev1 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 BaseBdev2_malloc 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 [2024-12-14 12:39:15.314872] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:15.711 [2024-12-14 12:39:15.314942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:15.711 [2024-12-14 12:39:15.314964] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:15.711 [2024-12-14 12:39:15.314978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.711 [2024-12-14 12:39:15.317344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.711 [2024-12-14 12:39:15.317388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.711 BaseBdev2 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 spare_malloc 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 spare_delay 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 
[2024-12-14 12:39:15.400344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:15.711 [2024-12-14 12:39:15.400408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.711 [2024-12-14 12:39:15.400447] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:15.711 [2024-12-14 12:39:15.400460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.711 [2024-12-14 12:39:15.402862] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.711 [2024-12-14 12:39:15.402909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:15.711 spare 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.711 [2024-12-14 12:39:15.412386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.711 [2024-12-14 12:39:15.414405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.711 [2024-12-14 12:39:15.414664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:15.711 [2024-12-14 12:39:15.414687] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:15.711 [2024-12-14 12:39:15.414961] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:15.711 [2024-12-14 12:39:15.415167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:15.711 [2024-12-14 
12:39:15.415179] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:15.711 [2024-12-14 12:39:15.415342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.711 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:15.712 12:39:15 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.970 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.970 "name": "raid_bdev1", 00:13:15.970 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:15.970 "strip_size_kb": 0, 00:13:15.970 "state": "online", 00:13:15.970 "raid_level": "raid1", 00:13:15.970 "superblock": true, 00:13:15.970 "num_base_bdevs": 2, 00:13:15.970 "num_base_bdevs_discovered": 2, 00:13:15.970 "num_base_bdevs_operational": 2, 00:13:15.970 "base_bdevs_list": [ 00:13:15.970 { 00:13:15.970 "name": "BaseBdev1", 00:13:15.970 "uuid": "d938cf3a-6ca2-565c-aff8-7d893e3a8ca8", 00:13:15.970 "is_configured": true, 00:13:15.970 "data_offset": 2048, 00:13:15.970 "data_size": 63488 00:13:15.970 }, 00:13:15.970 { 00:13:15.970 "name": "BaseBdev2", 00:13:15.970 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:15.970 "is_configured": true, 00:13:15.970 "data_offset": 2048, 00:13:15.970 "data_size": 63488 00:13:15.970 } 00:13:15.970 ] 00:13:15.970 }' 00:13:15.970 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.970 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.229 [2024-12-14 12:39:15.891948] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=63488 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.229 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.488 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:16.488 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:16.488 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.489 [2024-12-14 12:39:15.987421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 
00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.489 12:39:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.489 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.489 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.489 "name": "raid_bdev1", 00:13:16.489 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:16.489 "strip_size_kb": 0, 00:13:16.489 "state": "online", 00:13:16.489 "raid_level": "raid1", 00:13:16.489 "superblock": true, 00:13:16.489 "num_base_bdevs": 2, 00:13:16.489 "num_base_bdevs_discovered": 1, 00:13:16.489 "num_base_bdevs_operational": 1, 00:13:16.489 "base_bdevs_list": [ 00:13:16.489 { 00:13:16.489 "name": null, 00:13:16.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.489 "is_configured": false, 00:13:16.489 "data_offset": 0, 00:13:16.489 "data_size": 63488 00:13:16.489 }, 00:13:16.489 { 00:13:16.489 "name": "BaseBdev2", 00:13:16.489 "uuid": 
"109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:16.489 "is_configured": true, 00:13:16.489 "data_offset": 2048, 00:13:16.489 "data_size": 63488 00:13:16.489 } 00:13:16.489 ] 00:13:16.489 }' 00:13:16.489 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.489 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.489 [2024-12-14 12:39:16.083427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:16.489 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:16.489 Zero copy mechanism will not be used. 00:13:16.489 Running I/O for 60 seconds... 00:13:16.749 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.749 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.749 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.749 [2024-12-14 12:39:16.392357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.749 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.749 12:39:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:16.749 [2024-12-14 12:39:16.435884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:16.749 [2024-12-14 12:39:16.437735] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:17.008 [2024-12-14 12:39:16.550131] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.008 [2024-12-14 12:39:16.550811] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:17.008 [2024-12-14 12:39:16.693000] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:17.578 [2024-12-14 12:39:17.018102] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.578 [2024-12-14 12:39:17.018807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:17.578 224.00 IOPS, 672.00 MiB/s [2024-12-14T12:39:17.316Z] [2024-12-14 12:39:17.133192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.837 [2024-12-14 12:39:17.451502] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:17.837 [2024-12-14 12:39:17.452177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 
offset_end: 18432 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.837 "name": "raid_bdev1", 00:13:17.837 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:17.837 "strip_size_kb": 0, 00:13:17.837 "state": "online", 00:13:17.837 "raid_level": "raid1", 00:13:17.837 "superblock": true, 00:13:17.837 "num_base_bdevs": 2, 00:13:17.837 "num_base_bdevs_discovered": 2, 00:13:17.837 "num_base_bdevs_operational": 2, 00:13:17.837 "process": { 00:13:17.837 "type": "rebuild", 00:13:17.837 "target": "spare", 00:13:17.837 "progress": { 00:13:17.837 "blocks": 12288, 00:13:17.837 "percent": 19 00:13:17.837 } 00:13:17.837 }, 00:13:17.837 "base_bdevs_list": [ 00:13:17.837 { 00:13:17.837 "name": "spare", 00:13:17.837 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:17.837 "is_configured": true, 00:13:17.837 "data_offset": 2048, 00:13:17.837 "data_size": 63488 00:13:17.837 }, 00:13:17.837 { 00:13:17.837 "name": "BaseBdev2", 00:13:17.837 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:17.837 "is_configured": true, 00:13:17.837 "data_offset": 2048, 00:13:17.837 "data_size": 63488 00:13:17.837 } 00:13:17.837 ] 00:13:17.837 }' 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.837 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.097 [2024-12-14 12:39:17.600806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.097 [2024-12-14 12:39:17.727753] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:18.097 [2024-12-14 12:39:17.740805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.097 [2024-12-14 12:39:17.740940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:18.097 [2024-12-14 12:39:17.740974] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:18.097 [2024-12-14 12:39:17.776704] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.097 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.357 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:18.357 "name": "raid_bdev1", 00:13:18.357 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:18.357 "strip_size_kb": 0, 00:13:18.357 "state": "online", 00:13:18.357 "raid_level": "raid1", 00:13:18.357 "superblock": true, 00:13:18.357 "num_base_bdevs": 2, 00:13:18.357 "num_base_bdevs_discovered": 1, 00:13:18.357 "num_base_bdevs_operational": 1, 00:13:18.357 "base_bdevs_list": [ 00:13:18.357 { 00:13:18.357 "name": null, 00:13:18.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.357 "is_configured": false, 00:13:18.357 "data_offset": 0, 00:13:18.357 "data_size": 63488 00:13:18.357 }, 00:13:18.357 { 00:13:18.357 "name": "BaseBdev2", 00:13:18.357 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:18.357 "is_configured": true, 00:13:18.357 "data_offset": 2048, 00:13:18.357 "data_size": 63488 00:13:18.357 } 00:13:18.357 ] 00:13:18.357 }' 00:13:18.357 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:18.357 12:39:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.616 179.00 IOPS, 537.00 MiB/s [2024-12-14T12:39:18.354Z] 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.616 "name": "raid_bdev1", 00:13:18.616 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:18.616 "strip_size_kb": 0, 00:13:18.616 "state": "online", 00:13:18.616 "raid_level": "raid1", 00:13:18.616 "superblock": true, 00:13:18.616 "num_base_bdevs": 2, 00:13:18.616 "num_base_bdevs_discovered": 1, 00:13:18.616 "num_base_bdevs_operational": 1, 00:13:18.616 "base_bdevs_list": [ 00:13:18.616 { 00:13:18.616 "name": null, 00:13:18.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.616 "is_configured": false, 00:13:18.616 "data_offset": 0, 00:13:18.616 "data_size": 63488 00:13:18.616 }, 00:13:18.616 { 00:13:18.616 "name": "BaseBdev2", 00:13:18.616 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:18.616 "is_configured": true, 00:13:18.616 "data_offset": 2048, 00:13:18.616 "data_size": 63488 00:13:18.616 } 
00:13:18.616 ] 00:13:18.616 }' 00:13:18.616 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.876 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.876 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.876 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.876 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.876 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.876 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:18.876 [2024-12-14 12:39:18.414779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.876 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.876 12:39:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:18.876 [2024-12-14 12:39:18.480358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:18.876 [2024-12-14 12:39:18.482209] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:18.876 [2024-12-14 12:39:18.589378] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:18.876 [2024-12-14 12:39:18.589940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:19.136 [2024-12-14 12:39:18.699597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:19.136 [2024-12-14 12:39:18.699940] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:19.654 180.33 IOPS, 541.00 MiB/s [2024-12-14T12:39:19.393Z] [2024-12-14 12:39:19.136850] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.914 [2024-12-14 12:39:19.466583] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.914 "name": "raid_bdev1", 00:13:19.914 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:19.914 "strip_size_kb": 0, 00:13:19.914 "state": "online", 00:13:19.914 "raid_level": "raid1", 00:13:19.914 "superblock": true, 00:13:19.914 "num_base_bdevs": 2, 00:13:19.914 "num_base_bdevs_discovered": 2, 
00:13:19.914 "num_base_bdevs_operational": 2, 00:13:19.914 "process": { 00:13:19.914 "type": "rebuild", 00:13:19.914 "target": "spare", 00:13:19.914 "progress": { 00:13:19.914 "blocks": 14336, 00:13:19.914 "percent": 22 00:13:19.914 } 00:13:19.914 }, 00:13:19.914 "base_bdevs_list": [ 00:13:19.914 { 00:13:19.914 "name": "spare", 00:13:19.914 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:19.914 "is_configured": true, 00:13:19.914 "data_offset": 2048, 00:13:19.914 "data_size": 63488 00:13:19.914 }, 00:13:19.914 { 00:13:19.914 "name": "BaseBdev2", 00:13:19.914 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:19.914 "is_configured": true, 00:13:19.914 "data_offset": 2048, 00:13:19.914 "data_size": 63488 00:13:19.914 } 00:13:19.914 ] 00:13:19.914 }' 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:19.914 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=414 00:13:19.914 
12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.914 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.173 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.173 "name": "raid_bdev1", 00:13:20.173 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:20.173 "strip_size_kb": 0, 00:13:20.173 "state": "online", 00:13:20.173 "raid_level": "raid1", 00:13:20.173 "superblock": true, 00:13:20.173 "num_base_bdevs": 2, 00:13:20.173 "num_base_bdevs_discovered": 2, 00:13:20.173 "num_base_bdevs_operational": 2, 00:13:20.173 "process": { 00:13:20.173 "type": "rebuild", 00:13:20.173 "target": "spare", 00:13:20.173 "progress": { 00:13:20.173 "blocks": 18432, 00:13:20.173 "percent": 29 00:13:20.173 } 00:13:20.173 }, 00:13:20.173 "base_bdevs_list": [ 00:13:20.173 { 00:13:20.173 "name": "spare", 00:13:20.173 
"uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:20.173 "is_configured": true, 00:13:20.173 "data_offset": 2048, 00:13:20.173 "data_size": 63488 00:13:20.173 }, 00:13:20.173 { 00:13:20.173 "name": "BaseBdev2", 00:13:20.173 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:20.173 "is_configured": true, 00:13:20.173 "data_offset": 2048, 00:13:20.173 "data_size": 63488 00:13:20.173 } 00:13:20.173 ] 00:13:20.173 }' 00:13:20.173 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.173 [2024-12-14 12:39:19.695675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:20.173 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.173 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.173 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.173 12:39:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.436 [2024-12-14 12:39:19.922026] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:20.436 [2024-12-14 12:39:19.922455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:20.695 152.00 IOPS, 456.00 MiB/s [2024-12-14T12:39:20.433Z] [2024-12-14 12:39:20.240714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:20.695 [2024-12-14 12:39:20.342283] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:20.695 [2024-12-14 12:39:20.342689] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 
24576 offset_end: 30720 00:13:21.264 [2024-12-14 12:39:20.723122] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:21.264 "name": "raid_bdev1", 00:13:21.264 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:21.264 "strip_size_kb": 0, 00:13:21.264 "state": "online", 00:13:21.264 "raid_level": "raid1", 00:13:21.264 "superblock": true, 00:13:21.264 "num_base_bdevs": 2, 00:13:21.264 "num_base_bdevs_discovered": 2, 00:13:21.264 "num_base_bdevs_operational": 2, 00:13:21.264 "process": { 00:13:21.264 "type": "rebuild", 00:13:21.264 "target": "spare", 
00:13:21.264 "progress": { 00:13:21.264 "blocks": 34816, 00:13:21.264 "percent": 54 00:13:21.264 } 00:13:21.264 }, 00:13:21.264 "base_bdevs_list": [ 00:13:21.264 { 00:13:21.264 "name": "spare", 00:13:21.264 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:21.264 "is_configured": true, 00:13:21.264 "data_offset": 2048, 00:13:21.264 "data_size": 63488 00:13:21.264 }, 00:13:21.264 { 00:13:21.264 "name": "BaseBdev2", 00:13:21.264 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:21.264 "is_configured": true, 00:13:21.264 "data_offset": 2048, 00:13:21.264 "data_size": 63488 00:13:21.264 } 00:13:21.264 ] 00:13:21.264 }' 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:21.264 12:39:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.524 [2024-12-14 12:39:21.042457] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:22.093 135.40 IOPS, 406.20 MiB/s [2024-12-14T12:39:21.831Z] [2024-12-14 12:39:21.790474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:22.093 [2024-12-14 12:39:21.791138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.353 
12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.353 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.353 "name": "raid_bdev1", 00:13:22.353 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:22.353 "strip_size_kb": 0, 00:13:22.353 "state": "online", 00:13:22.353 "raid_level": "raid1", 00:13:22.353 "superblock": true, 00:13:22.353 "num_base_bdevs": 2, 00:13:22.353 "num_base_bdevs_discovered": 2, 00:13:22.353 "num_base_bdevs_operational": 2, 00:13:22.353 "process": { 00:13:22.353 "type": "rebuild", 00:13:22.353 "target": "spare", 00:13:22.353 "progress": { 00:13:22.354 "blocks": 51200, 00:13:22.354 "percent": 80 00:13:22.354 } 00:13:22.354 }, 00:13:22.354 "base_bdevs_list": [ 00:13:22.354 { 00:13:22.354 "name": "spare", 00:13:22.354 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:22.354 "is_configured": true, 00:13:22.354 "data_offset": 2048, 00:13:22.354 "data_size": 63488 00:13:22.354 }, 00:13:22.354 { 00:13:22.354 "name": "BaseBdev2", 00:13:22.354 "uuid": 
"109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:22.354 "is_configured": true, 00:13:22.354 "data_offset": 2048, 00:13:22.354 "data_size": 63488 00:13:22.354 } 00:13:22.354 ] 00:13:22.354 }' 00:13:22.354 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.354 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.354 12:39:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.354 12:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.354 12:39:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:22.612 119.67 IOPS, 359.00 MiB/s [2024-12-14T12:39:22.350Z] [2024-12-14 12:39:22.125540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:22.871 [2024-12-14 12:39:22.463200] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:22.871 [2024-12-14 12:39:22.569549] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:22.871 [2024-12-14 12:39:22.572754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.441 106.86 IOPS, 320.57 MiB/s [2024-12-14T12:39:23.179Z] 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.441 "name": "raid_bdev1", 00:13:23.441 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:23.441 "strip_size_kb": 0, 00:13:23.441 "state": "online", 00:13:23.441 "raid_level": "raid1", 00:13:23.441 "superblock": true, 00:13:23.441 "num_base_bdevs": 2, 00:13:23.441 "num_base_bdevs_discovered": 2, 00:13:23.441 "num_base_bdevs_operational": 2, 00:13:23.441 "base_bdevs_list": [ 00:13:23.441 { 00:13:23.441 "name": "spare", 00:13:23.441 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:23.441 "is_configured": true, 00:13:23.441 "data_offset": 2048, 00:13:23.441 "data_size": 63488 00:13:23.441 }, 00:13:23.441 { 00:13:23.441 "name": "BaseBdev2", 00:13:23.441 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:23.441 "is_configured": true, 00:13:23.441 "data_offset": 2048, 00:13:23.441 "data_size": 63488 00:13:23.441 } 00:13:23.441 ] 00:13:23.441 }' 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:23.441 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.700 "name": "raid_bdev1", 00:13:23.700 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:23.700 "strip_size_kb": 0, 00:13:23.700 "state": "online", 00:13:23.700 "raid_level": "raid1", 00:13:23.700 "superblock": true, 00:13:23.700 "num_base_bdevs": 2, 00:13:23.700 "num_base_bdevs_discovered": 2, 00:13:23.700 "num_base_bdevs_operational": 2, 00:13:23.700 "base_bdevs_list": [ 00:13:23.700 { 00:13:23.700 "name": "spare", 00:13:23.700 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:23.700 "is_configured": true, 00:13:23.700 
"data_offset": 2048, 00:13:23.700 "data_size": 63488 00:13:23.700 }, 00:13:23.700 { 00:13:23.700 "name": "BaseBdev2", 00:13:23.700 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:23.700 "is_configured": true, 00:13:23.700 "data_offset": 2048, 00:13:23.700 "data_size": 63488 00:13:23.700 } 00:13:23.700 ] 00:13:23.700 }' 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.700 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.700 "name": "raid_bdev1", 00:13:23.700 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:23.700 "strip_size_kb": 0, 00:13:23.700 "state": "online", 00:13:23.700 "raid_level": "raid1", 00:13:23.700 "superblock": true, 00:13:23.700 "num_base_bdevs": 2, 00:13:23.700 "num_base_bdevs_discovered": 2, 00:13:23.700 "num_base_bdevs_operational": 2, 00:13:23.700 "base_bdevs_list": [ 00:13:23.700 { 00:13:23.700 "name": "spare", 00:13:23.700 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:23.700 "is_configured": true, 00:13:23.700 "data_offset": 2048, 00:13:23.700 "data_size": 63488 00:13:23.700 }, 00:13:23.700 { 00:13:23.700 "name": "BaseBdev2", 00:13:23.701 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:23.701 "is_configured": true, 00:13:23.701 "data_offset": 2048, 00:13:23.701 "data_size": 63488 00:13:23.701 } 00:13:23.701 ] 00:13:23.701 }' 00:13:23.701 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.701 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.269 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:24.269 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.269 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.269 
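Editor's note: the polling loop above repeatedly extracts the rebuild state with `jq -r '.process.type // "none"'`. The `//` alternative operator is what lets the same filter work both mid-rebuild and after completion, when the `process` object disappears from the RPC output. A standalone sketch of that pattern (sample JSON abbreviated from the log):

```shell
#!/usr/bin/env bash
# While the rebuild is running, .process.type exists and is printed as-is.
echo '{"name":"raid_bdev1","process":{"type":"rebuild","target":"spare"}}' \
  | jq -r '.process.type // "none"'     # -> rebuild

# Once the rebuild finishes, .process is absent; `//` substitutes "none",
# which is the sentinel the test's [[ none == ... ]] comparisons match on.
echo '{"name":"raid_bdev1","state":"online"}' \
  | jq -r '.process.target // "none"'   # -> none
```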
[2024-12-14 12:39:23.734222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.269 [2024-12-14 12:39:23.734261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.269 00:13:24.269 Latency(us) 00:13:24.269 [2024-12-14T12:39:24.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:24.269 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:24.270 raid_bdev1 : 7.69 102.58 307.75 0.00 0.00 13124.11 300.49 115389.15 00:13:24.270 [2024-12-14T12:39:24.008Z] =================================================================================================================== 00:13:24.270 [2024-12-14T12:39:24.008Z] Total : 102.58 307.75 0.00 0.00 13124.11 300.49 115389.15 00:13:24.270 [2024-12-14 12:39:23.783962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.270 [2024-12-14 12:39:23.784049] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:24.270 [2024-12-14 12:39:23.784149] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.270 [2024-12-14 12:39:23.784164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:24.270 { 00:13:24.270 "results": [ 00:13:24.270 { 00:13:24.270 "job": "raid_bdev1", 00:13:24.270 "core_mask": "0x1", 00:13:24.270 "workload": "randrw", 00:13:24.270 "percentage": 50, 00:13:24.270 "status": "finished", 00:13:24.270 "queue_depth": 2, 00:13:24.270 "io_size": 3145728, 00:13:24.270 "runtime": 7.691418, 00:13:24.270 "iops": 102.58186461846176, 00:13:24.270 "mibps": 307.7455938553853, 00:13:24.270 "io_failed": 0, 00:13:24.270 "io_timeout": 0, 00:13:24.270 "avg_latency_us": 13124.107128032278, 00:13:24.270 "min_latency_us": 300.49257641921395, 00:13:24.270 "max_latency_us": 115389.14934497817 00:13:24.270 } 
00:13:24.270 ], 00:13:24.270 "core_count": 1 00:13:24.270 } 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.270 12:39:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.270 12:39:23 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:24.529 /dev/nbd0 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.529 1+0 records in 00:13:24.529 1+0 records out 00:13:24.529 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419504 s, 9.8 MB/s 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.529 
12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.529 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:24.790 /dev/nbd1 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.790 1+0 records in 00:13:24.790 1+0 records out 00:13:24.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020787 s, 19.7 MB/s 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.790 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:25.050 12:39:24 
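Editor's note: the `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step above verifies the two RAID-1 members are byte-identical past the first 1 MiB — consistent with the `data_offset: 2048` blocks × 512-byte `blocklen` reported earlier, i.e. the region before the data start holds per-member metadata and is expected to differ. A sketch of the same comparison with regular files standing in for the nbd devices (file names are illustrative):

```shell
#!/usr/bin/env bash
# Two 2 MiB images whose data regions (past 1 MiB) are identical,
# but whose leading metadata region differs — like two raid1 members.
head -c 2097152 /dev/zero > member_a.img
head -c 2097152 /dev/zero > member_b.img
printf 'different-superblock' | dd of=member_b.img conv=notrunc status=none

# cmp -i SKIP ignores the first SKIP bytes of BOTH inputs, so the
# differing metadata at offset 0 does not fail the mirror check.
cmp -i 1048576 member_a.img member_b.img && echo "mirrors match past offset"
```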
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.050 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.310 12:39:24 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.310 [2024-12-14 12:39:24.943981] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:25.310 [2024-12-14 12:39:24.944036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.310 [2024-12-14 12:39:24.944081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:25.310 [2024-12-14 12:39:24.944092] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.310 [2024-12-14 12:39:24.946284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.310 [2024-12-14 12:39:24.946322] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:25.310 [2024-12-14 12:39:24.946430] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:25.310 [2024-12-14 12:39:24.946479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:25.310 [2024-12-14 12:39:24.946633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.310 spare 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.310 12:39:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:25.569 [2024-12-14 12:39:25.046553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:25.569 [2024-12-14 12:39:25.046586] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:25.569 [2024-12-14 12:39:25.046910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:25.569 [2024-12-14 12:39:25.047154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:25.569 [2024-12-14 12:39:25.047185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:25.569 [2024-12-14 12:39:25.047387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 
-- # local tmp 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.569 "name": "raid_bdev1", 00:13:25.569 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:25.569 "strip_size_kb": 0, 00:13:25.569 "state": "online", 00:13:25.569 "raid_level": "raid1", 00:13:25.569 "superblock": true, 00:13:25.569 "num_base_bdevs": 2, 00:13:25.569 "num_base_bdevs_discovered": 2, 00:13:25.569 "num_base_bdevs_operational": 2, 00:13:25.569 "base_bdevs_list": [ 00:13:25.569 { 00:13:25.569 "name": "spare", 00:13:25.569 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:25.569 "is_configured": true, 00:13:25.569 "data_offset": 2048, 00:13:25.569 "data_size": 63488 00:13:25.569 }, 00:13:25.569 { 00:13:25.569 "name": "BaseBdev2", 00:13:25.569 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:25.569 "is_configured": true, 00:13:25.569 "data_offset": 2048, 00:13:25.569 "data_size": 63488 00:13:25.569 } 00:13:25.569 ] 00:13:25.569 }' 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.569 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:25.828 "name": "raid_bdev1", 00:13:25.828 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:25.828 "strip_size_kb": 0, 00:13:25.828 "state": "online", 00:13:25.828 "raid_level": "raid1", 00:13:25.828 "superblock": true, 00:13:25.828 "num_base_bdevs": 2, 00:13:25.828 "num_base_bdevs_discovered": 2, 00:13:25.828 "num_base_bdevs_operational": 2, 00:13:25.828 "base_bdevs_list": [ 00:13:25.828 { 00:13:25.828 "name": "spare", 00:13:25.828 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:25.828 "is_configured": true, 00:13:25.828 "data_offset": 2048, 00:13:25.828 "data_size": 63488 00:13:25.828 }, 00:13:25.828 { 00:13:25.828 "name": "BaseBdev2", 00:13:25.828 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:25.828 "is_configured": true, 00:13:25.828 "data_offset": 2048, 00:13:25.828 "data_size": 63488 00:13:25.828 } 00:13:25.828 ] 00:13:25.828 }' 00:13:25.828 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 [2024-12-14 12:39:25.670865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.087 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.087 "name": "raid_bdev1", 00:13:26.087 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:26.087 "strip_size_kb": 0, 00:13:26.087 "state": "online", 00:13:26.087 "raid_level": "raid1", 00:13:26.087 "superblock": true, 00:13:26.087 "num_base_bdevs": 2, 00:13:26.087 "num_base_bdevs_discovered": 1, 00:13:26.087 "num_base_bdevs_operational": 1, 00:13:26.087 "base_bdevs_list": [ 00:13:26.087 { 00:13:26.087 "name": null, 00:13:26.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.087 "is_configured": false, 00:13:26.087 "data_offset": 0, 00:13:26.087 "data_size": 63488 00:13:26.087 }, 00:13:26.087 { 
00:13:26.087 "name": "BaseBdev2", 00:13:26.087 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:26.087 "is_configured": true, 00:13:26.087 "data_offset": 2048, 00:13:26.087 "data_size": 63488 00:13:26.087 } 00:13:26.087 ] 00:13:26.087 }' 00:13:26.088 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.088 12:39:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.656 12:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:26.656 12:39:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.656 12:39:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.656 [2024-12-14 12:39:26.162179] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.656 [2024-12-14 12:39:26.162410] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:26.656 [2024-12-14 12:39:26.162425] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:26.656 [2024-12-14 12:39:26.162463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.656 [2024-12-14 12:39:26.178447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:26.656 12:39:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.656 12:39:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:26.656 [2024-12-14 12:39:26.180335] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.595 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.595 "name": "raid_bdev1", 00:13:27.595 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:27.595 "strip_size_kb": 0, 00:13:27.595 "state": "online", 
00:13:27.595 "raid_level": "raid1", 00:13:27.595 "superblock": true, 00:13:27.595 "num_base_bdevs": 2, 00:13:27.595 "num_base_bdevs_discovered": 2, 00:13:27.595 "num_base_bdevs_operational": 2, 00:13:27.595 "process": { 00:13:27.595 "type": "rebuild", 00:13:27.596 "target": "spare", 00:13:27.596 "progress": { 00:13:27.596 "blocks": 20480, 00:13:27.596 "percent": 32 00:13:27.596 } 00:13:27.596 }, 00:13:27.596 "base_bdevs_list": [ 00:13:27.596 { 00:13:27.596 "name": "spare", 00:13:27.596 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:27.596 "is_configured": true, 00:13:27.596 "data_offset": 2048, 00:13:27.596 "data_size": 63488 00:13:27.596 }, 00:13:27.596 { 00:13:27.596 "name": "BaseBdev2", 00:13:27.596 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:27.596 "is_configured": true, 00:13:27.596 "data_offset": 2048, 00:13:27.596 "data_size": 63488 00:13:27.596 } 00:13:27.596 ] 00:13:27.596 }' 00:13:27.596 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.596 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.596 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.596 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.596 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:27.596 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.596 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.596 [2024-12-14 12:39:27.307910] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.855 [2024-12-14 12:39:27.385332] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:27.855 [2024-12-14 
12:39:27.385411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.855 [2024-12-14 12:39:27.385429] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.855 [2024-12-14 12:39:27.385435] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:27.855 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.855 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:27.855 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:27.855 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.856 "name": "raid_bdev1", 00:13:27.856 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:27.856 "strip_size_kb": 0, 00:13:27.856 "state": "online", 00:13:27.856 "raid_level": "raid1", 00:13:27.856 "superblock": true, 00:13:27.856 "num_base_bdevs": 2, 00:13:27.856 "num_base_bdevs_discovered": 1, 00:13:27.856 "num_base_bdevs_operational": 1, 00:13:27.856 "base_bdevs_list": [ 00:13:27.856 { 00:13:27.856 "name": null, 00:13:27.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.856 "is_configured": false, 00:13:27.856 "data_offset": 0, 00:13:27.856 "data_size": 63488 00:13:27.856 }, 00:13:27.856 { 00:13:27.856 "name": "BaseBdev2", 00:13:27.856 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:27.856 "is_configured": true, 00:13:27.856 "data_offset": 2048, 00:13:27.856 "data_size": 63488 00:13:27.856 } 00:13:27.856 ] 00:13:27.856 }' 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.856 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.425 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:28.425 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.425 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.425 [2024-12-14 12:39:27.858999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:28.425 [2024-12-14 12:39:27.859089] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.425 [2024-12-14 12:39:27.859116] vbdev_passthru.c: 682:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:13:28.425 [2024-12-14 12:39:27.859127] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.425 [2024-12-14 12:39:27.859657] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.425 [2024-12-14 12:39:27.859686] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:28.425 [2024-12-14 12:39:27.859804] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:28.425 [2024-12-14 12:39:27.859824] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:28.425 [2024-12-14 12:39:27.859836] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:28.425 [2024-12-14 12:39:27.859857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:28.425 [2024-12-14 12:39:27.875933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:28.425 spare 00:13:28.425 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.425 12:39:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:28.425 [2024-12-14 12:39:27.877979] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.364 "name": "raid_bdev1", 00:13:29.364 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:29.364 "strip_size_kb": 0, 00:13:29.364 "state": "online", 00:13:29.364 "raid_level": "raid1", 00:13:29.364 "superblock": true, 00:13:29.364 "num_base_bdevs": 2, 00:13:29.364 "num_base_bdevs_discovered": 2, 00:13:29.364 "num_base_bdevs_operational": 2, 00:13:29.364 "process": { 00:13:29.364 "type": "rebuild", 00:13:29.364 "target": "spare", 00:13:29.364 "progress": { 00:13:29.364 "blocks": 20480, 00:13:29.364 "percent": 32 00:13:29.364 } 00:13:29.364 }, 00:13:29.364 "base_bdevs_list": [ 00:13:29.364 { 00:13:29.364 "name": "spare", 00:13:29.364 "uuid": "5d3eab38-31f7-5194-b281-855413640759", 00:13:29.364 "is_configured": true, 00:13:29.364 "data_offset": 2048, 00:13:29.364 "data_size": 63488 00:13:29.364 }, 00:13:29.364 { 00:13:29.364 "name": "BaseBdev2", 00:13:29.364 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:29.364 "is_configured": true, 00:13:29.364 "data_offset": 2048, 00:13:29.364 "data_size": 63488 00:13:29.364 } 00:13:29.364 ] 00:13:29.364 }' 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:13:29.364 12:39:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.364 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:29.364 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:29.364 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.364 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.364 [2024-12-14 12:39:29.037639] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.364 [2024-12-14 12:39:29.083129] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:29.364 [2024-12-14 12:39:29.083194] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.364 [2024-12-14 12:39:29.083210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.364 [2024-12-14 12:39:29.083223] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.623 "name": "raid_bdev1", 00:13:29.623 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:29.623 "strip_size_kb": 0, 00:13:29.623 "state": "online", 00:13:29.623 "raid_level": "raid1", 00:13:29.623 "superblock": true, 00:13:29.623 "num_base_bdevs": 2, 00:13:29.623 "num_base_bdevs_discovered": 1, 00:13:29.623 "num_base_bdevs_operational": 1, 00:13:29.623 "base_bdevs_list": [ 00:13:29.623 { 00:13:29.623 "name": null, 00:13:29.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.623 "is_configured": false, 00:13:29.623 "data_offset": 0, 00:13:29.623 "data_size": 63488 00:13:29.623 }, 00:13:29.623 { 00:13:29.623 "name": "BaseBdev2", 00:13:29.623 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:29.623 "is_configured": true, 00:13:29.623 "data_offset": 2048, 00:13:29.623 "data_size": 63488 00:13:29.623 } 00:13:29.623 ] 00:13:29.623 }' 
00:13:29.623 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.624 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.883 "name": "raid_bdev1", 00:13:29.883 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:29.883 "strip_size_kb": 0, 00:13:29.883 "state": "online", 00:13:29.883 "raid_level": "raid1", 00:13:29.883 "superblock": true, 00:13:29.883 "num_base_bdevs": 2, 00:13:29.883 "num_base_bdevs_discovered": 1, 00:13:29.883 "num_base_bdevs_operational": 1, 00:13:29.883 "base_bdevs_list": [ 00:13:29.883 { 00:13:29.883 "name": null, 00:13:29.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.883 "is_configured": false, 00:13:29.883 "data_offset": 0, 
00:13:29.883 "data_size": 63488 00:13:29.883 }, 00:13:29.883 { 00:13:29.883 "name": "BaseBdev2", 00:13:29.883 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:29.883 "is_configured": true, 00:13:29.883 "data_offset": 2048, 00:13:29.883 "data_size": 63488 00:13:29.883 } 00:13:29.883 ] 00:13:29.883 }' 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.883 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:30.143 [2024-12-14 12:39:29.670475] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:30.143 [2024-12-14 12:39:29.670533] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.143 [2024-12-14 12:39:29.670553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:30.143 [2024-12-14 12:39:29.670565] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.143 [2024-12-14 12:39:29.671024] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.143 [2024-12-14 12:39:29.671065] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:30.143 [2024-12-14 12:39:29.671151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:30.143 [2024-12-14 12:39:29.671173] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:30.143 [2024-12-14 12:39:29.671181] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:30.143 [2024-12-14 12:39:29.671195] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:30.143 BaseBdev1 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.143 12:39:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.081 "name": "raid_bdev1", 00:13:31.081 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:31.081 "strip_size_kb": 0, 00:13:31.081 "state": "online", 00:13:31.081 "raid_level": "raid1", 00:13:31.081 "superblock": true, 00:13:31.081 "num_base_bdevs": 2, 00:13:31.081 "num_base_bdevs_discovered": 1, 00:13:31.081 "num_base_bdevs_operational": 1, 00:13:31.081 "base_bdevs_list": [ 00:13:31.081 { 00:13:31.081 "name": null, 00:13:31.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.081 "is_configured": false, 00:13:31.081 "data_offset": 0, 00:13:31.081 "data_size": 63488 00:13:31.081 }, 00:13:31.081 { 00:13:31.081 "name": "BaseBdev2", 00:13:31.081 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:31.081 "is_configured": true, 00:13:31.081 "data_offset": 2048, 00:13:31.081 "data_size": 63488 00:13:31.081 } 00:13:31.081 ] 00:13:31.081 }' 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.081 12:39:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.651 "name": "raid_bdev1", 00:13:31.651 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:31.651 "strip_size_kb": 0, 00:13:31.651 "state": "online", 00:13:31.651 "raid_level": "raid1", 00:13:31.651 "superblock": true, 00:13:31.651 "num_base_bdevs": 2, 00:13:31.651 "num_base_bdevs_discovered": 1, 00:13:31.651 "num_base_bdevs_operational": 1, 00:13:31.651 "base_bdevs_list": [ 00:13:31.651 { 00:13:31.651 "name": null, 00:13:31.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.651 "is_configured": false, 00:13:31.651 "data_offset": 0, 00:13:31.651 "data_size": 63488 00:13:31.651 }, 00:13:31.651 { 00:13:31.651 "name": "BaseBdev2", 00:13:31.651 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:31.651 "is_configured": true, 
00:13:31.651 "data_offset": 2048, 00:13:31.651 "data_size": 63488 00:13:31.651 } 00:13:31.651 ] 00:13:31.651 }' 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.651 [2024-12-14 12:39:31.255997] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:31.651 [2024-12-14 12:39:31.256195] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:31.651 [2024-12-14 12:39:31.256208] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:31.651 request: 00:13:31.651 { 00:13:31.651 "base_bdev": "BaseBdev1", 00:13:31.651 "raid_bdev": "raid_bdev1", 00:13:31.651 "method": "bdev_raid_add_base_bdev", 00:13:31.651 "req_id": 1 00:13:31.651 } 00:13:31.651 Got JSON-RPC error response 00:13:31.651 response: 00:13:31.651 { 00:13:31.651 "code": -22, 00:13:31.651 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:31.651 } 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:31.651 12:39:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.623 "name": "raid_bdev1", 00:13:32.623 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:32.623 "strip_size_kb": 0, 00:13:32.623 "state": "online", 00:13:32.623 "raid_level": "raid1", 00:13:32.623 "superblock": true, 00:13:32.623 "num_base_bdevs": 2, 00:13:32.623 "num_base_bdevs_discovered": 1, 00:13:32.623 "num_base_bdevs_operational": 1, 00:13:32.623 "base_bdevs_list": [ 00:13:32.623 { 00:13:32.623 "name": null, 00:13:32.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.623 "is_configured": false, 00:13:32.623 "data_offset": 0, 00:13:32.623 "data_size": 63488 00:13:32.623 }, 00:13:32.623 { 00:13:32.623 "name": "BaseBdev2", 00:13:32.623 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:32.623 "is_configured": true, 00:13:32.623 "data_offset": 2048, 00:13:32.623 "data_size": 63488 00:13:32.623 } 00:13:32.623 ] 00:13:32.623 }' 
00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.623 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.193 "name": "raid_bdev1", 00:13:33.193 "uuid": "d740ce7b-117d-4a61-9e9f-14ac534413a2", 00:13:33.193 "strip_size_kb": 0, 00:13:33.193 "state": "online", 00:13:33.193 "raid_level": "raid1", 00:13:33.193 "superblock": true, 00:13:33.193 "num_base_bdevs": 2, 00:13:33.193 "num_base_bdevs_discovered": 1, 00:13:33.193 "num_base_bdevs_operational": 1, 00:13:33.193 "base_bdevs_list": [ 00:13:33.193 { 00:13:33.193 "name": null, 00:13:33.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.193 "is_configured": false, 00:13:33.193 "data_offset": 0, 
00:13:33.193 "data_size": 63488 00:13:33.193 }, 00:13:33.193 { 00:13:33.193 "name": "BaseBdev2", 00:13:33.193 "uuid": "109eadf5-214b-5f3a-86f7-90e5069d07c3", 00:13:33.193 "is_configured": true, 00:13:33.193 "data_offset": 2048, 00:13:33.193 "data_size": 63488 00:13:33.193 } 00:13:33.193 ] 00:13:33.193 }' 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78611 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78611 ']' 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78611 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78611 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:33.193 killing process with pid 78611 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78611' 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78611 00:13:33.193 Received shutdown signal, test time was 
about 16.853200 seconds 00:13:33.193 00:13:33.193 Latency(us) 00:13:33.193 [2024-12-14T12:39:32.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.193 [2024-12-14T12:39:32.931Z] =================================================================================================================== 00:13:33.193 [2024-12-14T12:39:32.931Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:33.193 [2024-12-14 12:39:32.906239] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:33.193 [2024-12-14 12:39:32.906372] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.193 12:39:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78611 00:13:33.193 [2024-12-14 12:39:32.906438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.193 [2024-12-14 12:39:32.906450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:33.453 [2024-12-14 12:39:33.129278] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:34.834 12:39:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:34.834 00:13:34.834 real 0m20.061s 00:13:34.834 user 0m26.358s 00:13:34.834 sys 0m2.074s 00:13:34.834 12:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.834 12:39:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.834 ************************************ 00:13:34.834 END TEST raid_rebuild_test_sb_io 00:13:34.834 ************************************ 00:13:34.834 12:39:34 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:34.834 12:39:34 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:34.834 12:39:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:34.834 
12:39:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.834 12:39:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:34.834 ************************************ 00:13:34.834 START TEST raid_rebuild_test 00:13:34.834 ************************************ 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=79295 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 79295 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 79295 ']' 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.835 12:39:34 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:34.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:34.835 12:39:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:34.835 [2024-12-14 12:39:34.431344] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:34.835 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:34.835 Zero copy mechanism will not be used. 00:13:34.835 [2024-12-14 12:39:34.432102] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79295 ] 00:13:35.095 [2024-12-14 12:39:34.618330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:35.095 [2024-12-14 12:39:34.732391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.355 [2024-12-14 12:39:34.925048] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.355 [2024-12-14 12:39:34.925096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.613 BaseBdev1_malloc 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.613 [2024-12-14 12:39:35.290284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.613 [2024-12-14 12:39:35.290347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.613 [2024-12-14 12:39:35.290371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:35.613 [2024-12-14 12:39:35.290382] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.613 [2024-12-14 12:39:35.292439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.613 [2024-12-14 12:39:35.292474] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.613 BaseBdev1 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:13:35.613 BaseBdev2_malloc 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.613 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.613 [2024-12-14 12:39:35.340855] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:35.613 [2024-12-14 12:39:35.340913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.613 [2024-12-14 12:39:35.340933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:35.613 [2024-12-14 12:39:35.340945] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.614 [2024-12-14 12:39:35.343029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.614 [2024-12-14 12:39:35.343070] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:35.614 BaseBdev2 00:13:35.614 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.614 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.614 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:35.614 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.614 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.873 BaseBdev3_malloc 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.873 [2024-12-14 12:39:35.408079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:35.873 [2024-12-14 12:39:35.408123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.873 [2024-12-14 12:39:35.408144] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:35.873 [2024-12-14 12:39:35.408155] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.873 [2024-12-14 12:39:35.410178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.873 [2024-12-14 12:39:35.410214] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:35.873 BaseBdev3 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.873 BaseBdev4_malloc 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:35.873 [2024-12-14 12:39:35.457668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:35.873 [2024-12-14 12:39:35.457735] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.873 [2024-12-14 12:39:35.457755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:35.873 [2024-12-14 12:39:35.457764] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.873 [2024-12-14 12:39:35.459771] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.873 [2024-12-14 12:39:35.459809] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:35.873 BaseBdev4 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.873 spare_malloc 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.873 spare_delay 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.873 
12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.873 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.873 [2024-12-14 12:39:35.514936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.873 [2024-12-14 12:39:35.514984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.873 [2024-12-14 12:39:35.515000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:35.873 [2024-12-14 12:39:35.515011] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.873 [2024-12-14 12:39:35.517265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.873 [2024-12-14 12:39:35.517312] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.873 spare 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.874 [2024-12-14 12:39:35.522969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:35.874 [2024-12-14 12:39:35.524782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.874 [2024-12-14 12:39:35.524845] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.874 [2024-12-14 12:39:35.524893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:35.874 [2024-12-14 12:39:35.524994] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:13:35.874 [2024-12-14 12:39:35.525010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:35.874 [2024-12-14 12:39:35.525252] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:35.874 [2024-12-14 12:39:35.525426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:35.874 [2024-12-14 12:39:35.525445] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:35.874 [2024-12-14 12:39:35.525583] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.874 12:39:35 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.874 "name": "raid_bdev1", 00:13:35.874 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:35.874 "strip_size_kb": 0, 00:13:35.874 "state": "online", 00:13:35.874 "raid_level": "raid1", 00:13:35.874 "superblock": false, 00:13:35.874 "num_base_bdevs": 4, 00:13:35.874 "num_base_bdevs_discovered": 4, 00:13:35.874 "num_base_bdevs_operational": 4, 00:13:35.874 "base_bdevs_list": [ 00:13:35.874 { 00:13:35.874 "name": "BaseBdev1", 00:13:35.874 "uuid": "c8a50807-1590-5905-9e87-305b5377f367", 00:13:35.874 "is_configured": true, 00:13:35.874 "data_offset": 0, 00:13:35.874 "data_size": 65536 00:13:35.874 }, 00:13:35.874 { 00:13:35.874 "name": "BaseBdev2", 00:13:35.874 "uuid": "b0f99fbe-cbbc-51f3-adfa-18019baf9c5c", 00:13:35.874 "is_configured": true, 00:13:35.874 "data_offset": 0, 00:13:35.874 "data_size": 65536 00:13:35.874 }, 00:13:35.874 { 00:13:35.874 "name": "BaseBdev3", 00:13:35.874 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:35.874 "is_configured": true, 00:13:35.874 "data_offset": 0, 00:13:35.874 "data_size": 65536 00:13:35.874 }, 00:13:35.874 { 00:13:35.874 "name": "BaseBdev4", 00:13:35.874 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:35.874 "is_configured": true, 00:13:35.874 "data_offset": 0, 00:13:35.874 "data_size": 65536 00:13:35.874 } 00:13:35.874 ] 00:13:35.874 }' 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.874 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.443 [2024-12-14 12:39:35.950586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:36.443 12:39:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.443 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:36.703 [2024-12-14 12:39:36.213831] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:36.703 /dev/nbd0 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:36.703 12:39:36 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:36.703 1+0 records in 00:13:36.703 1+0 records out 00:13:36.703 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034145 s, 12.0 MB/s 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:36.703 12:39:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:41.983 65536+0 records in 00:13:41.983 65536+0 records out 00:13:41.983 33554432 bytes (34 MB, 32 MiB) copied, 5.26476 s, 6.4 MB/s 00:13:41.983 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:41.983 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:41.983 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:41.983 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:41.983 
12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:41.983 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:41.983 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:42.243 [2024-12-14 12:39:41.739911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.243 [2024-12-14 12:39:41.767942] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.243 "name": "raid_bdev1", 00:13:42.243 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:42.243 "strip_size_kb": 0, 00:13:42.243 "state": "online", 00:13:42.243 "raid_level": "raid1", 00:13:42.243 "superblock": false, 00:13:42.243 "num_base_bdevs": 4, 00:13:42.243 "num_base_bdevs_discovered": 3, 00:13:42.243 "num_base_bdevs_operational": 3, 00:13:42.243 "base_bdevs_list": [ 00:13:42.243 { 00:13:42.243 "name": null, 00:13:42.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.243 
"is_configured": false, 00:13:42.243 "data_offset": 0, 00:13:42.243 "data_size": 65536 00:13:42.243 }, 00:13:42.243 { 00:13:42.243 "name": "BaseBdev2", 00:13:42.243 "uuid": "b0f99fbe-cbbc-51f3-adfa-18019baf9c5c", 00:13:42.243 "is_configured": true, 00:13:42.243 "data_offset": 0, 00:13:42.243 "data_size": 65536 00:13:42.243 }, 00:13:42.243 { 00:13:42.243 "name": "BaseBdev3", 00:13:42.243 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:42.243 "is_configured": true, 00:13:42.243 "data_offset": 0, 00:13:42.243 "data_size": 65536 00:13:42.243 }, 00:13:42.243 { 00:13:42.243 "name": "BaseBdev4", 00:13:42.243 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:42.243 "is_configured": true, 00:13:42.243 "data_offset": 0, 00:13:42.243 "data_size": 65536 00:13:42.243 } 00:13:42.243 ] 00:13:42.243 }' 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.243 12:39:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.503 12:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.503 12:39:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.503 12:39:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.503 [2024-12-14 12:39:42.223173] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.503 [2024-12-14 12:39:42.237176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:42.503 12:39:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.503 12:39:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:42.503 [2024-12-14 12:39:42.239165] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.883 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:43.883 "name": "raid_bdev1", 00:13:43.883 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:43.883 "strip_size_kb": 0, 00:13:43.883 "state": "online", 00:13:43.883 "raid_level": "raid1", 00:13:43.883 "superblock": false, 00:13:43.883 "num_base_bdevs": 4, 00:13:43.884 "num_base_bdevs_discovered": 4, 00:13:43.884 "num_base_bdevs_operational": 4, 00:13:43.884 "process": { 00:13:43.884 "type": "rebuild", 00:13:43.884 "target": "spare", 00:13:43.884 "progress": { 00:13:43.884 "blocks": 20480, 00:13:43.884 "percent": 31 00:13:43.884 } 00:13:43.884 }, 00:13:43.884 "base_bdevs_list": [ 00:13:43.884 { 00:13:43.884 "name": "spare", 00:13:43.884 "uuid": "20681003-251f-5893-9e4a-a0aefcfde0d6", 00:13:43.884 "is_configured": true, 00:13:43.884 "data_offset": 0, 00:13:43.884 "data_size": 65536 00:13:43.884 }, 00:13:43.884 { 00:13:43.884 "name": "BaseBdev2", 00:13:43.884 "uuid": 
"b0f99fbe-cbbc-51f3-adfa-18019baf9c5c", 00:13:43.884 "is_configured": true, 00:13:43.884 "data_offset": 0, 00:13:43.884 "data_size": 65536 00:13:43.884 }, 00:13:43.884 { 00:13:43.884 "name": "BaseBdev3", 00:13:43.884 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:43.884 "is_configured": true, 00:13:43.884 "data_offset": 0, 00:13:43.884 "data_size": 65536 00:13:43.884 }, 00:13:43.884 { 00:13:43.884 "name": "BaseBdev4", 00:13:43.884 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:43.884 "is_configured": true, 00:13:43.884 "data_offset": 0, 00:13:43.884 "data_size": 65536 00:13:43.884 } 00:13:43.884 ] 00:13:43.884 }' 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.884 [2024-12-14 12:39:43.378504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.884 [2024-12-14 12:39:43.444556] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:43.884 [2024-12-14 12:39:43.444640] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.884 [2024-12-14 12:39:43.444656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:43.884 [2024-12-14 12:39:43.444665] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.884 "name": "raid_bdev1", 00:13:43.884 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:43.884 "strip_size_kb": 0, 00:13:43.884 "state": "online", 
00:13:43.884 "raid_level": "raid1", 00:13:43.884 "superblock": false, 00:13:43.884 "num_base_bdevs": 4, 00:13:43.884 "num_base_bdevs_discovered": 3, 00:13:43.884 "num_base_bdevs_operational": 3, 00:13:43.884 "base_bdevs_list": [ 00:13:43.884 { 00:13:43.884 "name": null, 00:13:43.884 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.884 "is_configured": false, 00:13:43.884 "data_offset": 0, 00:13:43.884 "data_size": 65536 00:13:43.884 }, 00:13:43.884 { 00:13:43.884 "name": "BaseBdev2", 00:13:43.884 "uuid": "b0f99fbe-cbbc-51f3-adfa-18019baf9c5c", 00:13:43.884 "is_configured": true, 00:13:43.884 "data_offset": 0, 00:13:43.884 "data_size": 65536 00:13:43.884 }, 00:13:43.884 { 00:13:43.884 "name": "BaseBdev3", 00:13:43.884 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:43.884 "is_configured": true, 00:13:43.884 "data_offset": 0, 00:13:43.884 "data_size": 65536 00:13:43.884 }, 00:13:43.884 { 00:13:43.884 "name": "BaseBdev4", 00:13:43.884 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:43.884 "is_configured": true, 00:13:43.884 "data_offset": 0, 00:13:43.884 "data_size": 65536 00:13:43.884 } 00:13:43.884 ] 00:13:43.884 }' 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.884 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.143 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.402 12:39:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.402 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:44.402 "name": "raid_bdev1", 00:13:44.402 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:44.402 "strip_size_kb": 0, 00:13:44.402 "state": "online", 00:13:44.402 "raid_level": "raid1", 00:13:44.402 "superblock": false, 00:13:44.402 "num_base_bdevs": 4, 00:13:44.402 "num_base_bdevs_discovered": 3, 00:13:44.402 "num_base_bdevs_operational": 3, 00:13:44.402 "base_bdevs_list": [ 00:13:44.402 { 00:13:44.402 "name": null, 00:13:44.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:44.402 "is_configured": false, 00:13:44.402 "data_offset": 0, 00:13:44.402 "data_size": 65536 00:13:44.402 }, 00:13:44.402 { 00:13:44.402 "name": "BaseBdev2", 00:13:44.402 "uuid": "b0f99fbe-cbbc-51f3-adfa-18019baf9c5c", 00:13:44.402 "is_configured": true, 00:13:44.402 "data_offset": 0, 00:13:44.402 "data_size": 65536 00:13:44.402 }, 00:13:44.402 { 00:13:44.402 "name": "BaseBdev3", 00:13:44.402 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:44.402 "is_configured": true, 00:13:44.402 "data_offset": 0, 00:13:44.402 "data_size": 65536 00:13:44.402 }, 00:13:44.402 { 00:13:44.402 "name": "BaseBdev4", 00:13:44.402 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:44.402 "is_configured": true, 00:13:44.402 "data_offset": 0, 00:13:44.402 "data_size": 65536 00:13:44.402 } 00:13:44.402 ] 00:13:44.402 }' 00:13:44.402 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:44.402 12:39:43 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:44.402 12:39:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:44.402 12:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:44.402 12:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:44.402 12:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.402 12:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.402 [2024-12-14 12:39:44.029125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:44.402 [2024-12-14 12:39:44.043307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:44.402 12:39:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.402 12:39:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:44.403 [2024-12-14 12:39:44.045161] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:45.340 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.341 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.341 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.341 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.341 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.341 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.341 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.341 12:39:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.341 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.341 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.599 "name": "raid_bdev1", 00:13:45.599 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:45.599 "strip_size_kb": 0, 00:13:45.599 "state": "online", 00:13:45.599 "raid_level": "raid1", 00:13:45.599 "superblock": false, 00:13:45.599 "num_base_bdevs": 4, 00:13:45.599 "num_base_bdevs_discovered": 4, 00:13:45.599 "num_base_bdevs_operational": 4, 00:13:45.599 "process": { 00:13:45.599 "type": "rebuild", 00:13:45.599 "target": "spare", 00:13:45.599 "progress": { 00:13:45.599 "blocks": 20480, 00:13:45.599 "percent": 31 00:13:45.599 } 00:13:45.599 }, 00:13:45.599 "base_bdevs_list": [ 00:13:45.599 { 00:13:45.599 "name": "spare", 00:13:45.599 "uuid": "20681003-251f-5893-9e4a-a0aefcfde0d6", 00:13:45.599 "is_configured": true, 00:13:45.599 "data_offset": 0, 00:13:45.599 "data_size": 65536 00:13:45.599 }, 00:13:45.599 { 00:13:45.599 "name": "BaseBdev2", 00:13:45.599 "uuid": "b0f99fbe-cbbc-51f3-adfa-18019baf9c5c", 00:13:45.599 "is_configured": true, 00:13:45.599 "data_offset": 0, 00:13:45.599 "data_size": 65536 00:13:45.599 }, 00:13:45.599 { 00:13:45.599 "name": "BaseBdev3", 00:13:45.599 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:45.599 "is_configured": true, 00:13:45.599 "data_offset": 0, 00:13:45.599 "data_size": 65536 00:13:45.599 }, 00:13:45.599 { 00:13:45.599 "name": "BaseBdev4", 00:13:45.599 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:45.599 "is_configured": true, 00:13:45.599 "data_offset": 0, 00:13:45.599 "data_size": 65536 00:13:45.599 } 00:13:45.599 ] 00:13:45.599 }' 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.599 [2024-12-14 12:39:45.196573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:45.599 [2024-12-14 12:39:45.250137] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.599 
12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.599 "name": "raid_bdev1", 00:13:45.599 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:45.599 "strip_size_kb": 0, 00:13:45.599 "state": "online", 00:13:45.599 "raid_level": "raid1", 00:13:45.599 "superblock": false, 00:13:45.599 "num_base_bdevs": 4, 00:13:45.599 "num_base_bdevs_discovered": 3, 00:13:45.599 "num_base_bdevs_operational": 3, 00:13:45.599 "process": { 00:13:45.599 "type": "rebuild", 00:13:45.599 "target": "spare", 00:13:45.599 "progress": { 00:13:45.599 "blocks": 24576, 00:13:45.599 "percent": 37 00:13:45.599 } 00:13:45.599 }, 00:13:45.599 "base_bdevs_list": [ 00:13:45.599 { 00:13:45.599 "name": "spare", 00:13:45.599 "uuid": "20681003-251f-5893-9e4a-a0aefcfde0d6", 00:13:45.599 "is_configured": true, 00:13:45.599 "data_offset": 0, 00:13:45.599 "data_size": 65536 00:13:45.599 }, 00:13:45.599 { 00:13:45.599 "name": null, 00:13:45.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.599 "is_configured": false, 00:13:45.599 "data_offset": 0, 00:13:45.599 "data_size": 65536 00:13:45.599 }, 00:13:45.599 { 00:13:45.599 "name": "BaseBdev3", 00:13:45.599 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:45.599 "is_configured": true, 
00:13:45.599 "data_offset": 0, 00:13:45.599 "data_size": 65536 00:13:45.599 }, 00:13:45.599 { 00:13:45.599 "name": "BaseBdev4", 00:13:45.599 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:45.599 "is_configured": true, 00:13:45.599 "data_offset": 0, 00:13:45.599 "data_size": 65536 00:13:45.599 } 00:13:45.599 ] 00:13:45.599 }' 00:13:45.599 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=440 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.858 12:39:45 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:45.858 "name": "raid_bdev1", 00:13:45.858 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:45.858 "strip_size_kb": 0, 00:13:45.858 "state": "online", 00:13:45.858 "raid_level": "raid1", 00:13:45.858 "superblock": false, 00:13:45.858 "num_base_bdevs": 4, 00:13:45.858 "num_base_bdevs_discovered": 3, 00:13:45.858 "num_base_bdevs_operational": 3, 00:13:45.858 "process": { 00:13:45.858 "type": "rebuild", 00:13:45.858 "target": "spare", 00:13:45.858 "progress": { 00:13:45.858 "blocks": 26624, 00:13:45.858 "percent": 40 00:13:45.858 } 00:13:45.858 }, 00:13:45.858 "base_bdevs_list": [ 00:13:45.858 { 00:13:45.858 "name": "spare", 00:13:45.858 "uuid": "20681003-251f-5893-9e4a-a0aefcfde0d6", 00:13:45.858 "is_configured": true, 00:13:45.858 "data_offset": 0, 00:13:45.858 "data_size": 65536 00:13:45.858 }, 00:13:45.858 { 00:13:45.858 "name": null, 00:13:45.858 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:45.858 "is_configured": false, 00:13:45.858 "data_offset": 0, 00:13:45.858 "data_size": 65536 00:13:45.858 }, 00:13:45.858 { 00:13:45.858 "name": "BaseBdev3", 00:13:45.858 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:45.858 "is_configured": true, 00:13:45.858 "data_offset": 0, 00:13:45.858 "data_size": 65536 00:13:45.858 }, 00:13:45.858 { 00:13:45.858 "name": "BaseBdev4", 00:13:45.858 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:45.858 "is_configured": true, 00:13:45.858 "data_offset": 0, 00:13:45.858 "data_size": 65536 00:13:45.858 } 00:13:45.858 ] 00:13:45.858 }' 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:45.858 12:39:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.795 12:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.054 12:39:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.054 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.054 "name": "raid_bdev1", 00:13:47.054 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:47.054 "strip_size_kb": 0, 00:13:47.054 "state": "online", 00:13:47.054 "raid_level": "raid1", 00:13:47.054 "superblock": false, 00:13:47.054 "num_base_bdevs": 4, 00:13:47.054 "num_base_bdevs_discovered": 3, 00:13:47.054 "num_base_bdevs_operational": 3, 00:13:47.054 "process": { 00:13:47.054 "type": "rebuild", 00:13:47.054 "target": "spare", 00:13:47.054 "progress": { 00:13:47.054 
"blocks": 49152, 00:13:47.054 "percent": 75 00:13:47.054 } 00:13:47.054 }, 00:13:47.054 "base_bdevs_list": [ 00:13:47.054 { 00:13:47.054 "name": "spare", 00:13:47.054 "uuid": "20681003-251f-5893-9e4a-a0aefcfde0d6", 00:13:47.054 "is_configured": true, 00:13:47.054 "data_offset": 0, 00:13:47.054 "data_size": 65536 00:13:47.054 }, 00:13:47.054 { 00:13:47.054 "name": null, 00:13:47.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.054 "is_configured": false, 00:13:47.054 "data_offset": 0, 00:13:47.054 "data_size": 65536 00:13:47.054 }, 00:13:47.054 { 00:13:47.054 "name": "BaseBdev3", 00:13:47.054 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:47.054 "is_configured": true, 00:13:47.054 "data_offset": 0, 00:13:47.054 "data_size": 65536 00:13:47.054 }, 00:13:47.054 { 00:13:47.054 "name": "BaseBdev4", 00:13:47.054 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:47.054 "is_configured": true, 00:13:47.054 "data_offset": 0, 00:13:47.054 "data_size": 65536 00:13:47.054 } 00:13:47.054 ] 00:13:47.054 }' 00:13:47.054 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.054 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.054 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.054 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.054 12:39:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:47.622 [2024-12-14 12:39:47.258373] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:47.622 [2024-12-14 12:39:47.258495] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:47.622 [2024-12-14 12:39:47.258541] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.192 "name": "raid_bdev1", 00:13:48.192 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:48.192 "strip_size_kb": 0, 00:13:48.192 "state": "online", 00:13:48.192 "raid_level": "raid1", 00:13:48.192 "superblock": false, 00:13:48.192 "num_base_bdevs": 4, 00:13:48.192 "num_base_bdevs_discovered": 3, 00:13:48.192 "num_base_bdevs_operational": 3, 00:13:48.192 "base_bdevs_list": [ 00:13:48.192 { 00:13:48.192 "name": "spare", 00:13:48.192 "uuid": "20681003-251f-5893-9e4a-a0aefcfde0d6", 00:13:48.192 "is_configured": true, 00:13:48.192 "data_offset": 0, 00:13:48.192 "data_size": 65536 00:13:48.192 }, 00:13:48.192 { 00:13:48.192 "name": null, 00:13:48.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.192 "is_configured": false, 00:13:48.192 
"data_offset": 0, 00:13:48.192 "data_size": 65536 00:13:48.192 }, 00:13:48.192 { 00:13:48.192 "name": "BaseBdev3", 00:13:48.192 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:48.192 "is_configured": true, 00:13:48.192 "data_offset": 0, 00:13:48.192 "data_size": 65536 00:13:48.192 }, 00:13:48.192 { 00:13:48.192 "name": "BaseBdev4", 00:13:48.192 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:48.192 "is_configured": true, 00:13:48.192 "data_offset": 0, 00:13:48.192 "data_size": 65536 00:13:48.192 } 00:13:48.192 ] 00:13:48.192 }' 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.192 "name": "raid_bdev1", 00:13:48.192 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:48.192 "strip_size_kb": 0, 00:13:48.192 "state": "online", 00:13:48.192 "raid_level": "raid1", 00:13:48.192 "superblock": false, 00:13:48.192 "num_base_bdevs": 4, 00:13:48.192 "num_base_bdevs_discovered": 3, 00:13:48.192 "num_base_bdevs_operational": 3, 00:13:48.192 "base_bdevs_list": [ 00:13:48.192 { 00:13:48.192 "name": "spare", 00:13:48.192 "uuid": "20681003-251f-5893-9e4a-a0aefcfde0d6", 00:13:48.192 "is_configured": true, 00:13:48.192 "data_offset": 0, 00:13:48.192 "data_size": 65536 00:13:48.192 }, 00:13:48.192 { 00:13:48.192 "name": null, 00:13:48.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.192 "is_configured": false, 00:13:48.192 "data_offset": 0, 00:13:48.192 "data_size": 65536 00:13:48.192 }, 00:13:48.192 { 00:13:48.192 "name": "BaseBdev3", 00:13:48.192 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:48.192 "is_configured": true, 00:13:48.192 "data_offset": 0, 00:13:48.192 "data_size": 65536 00:13:48.192 }, 00:13:48.192 { 00:13:48.192 "name": "BaseBdev4", 00:13:48.192 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:48.192 "is_configured": true, 00:13:48.192 "data_offset": 0, 00:13:48.192 "data_size": 65536 00:13:48.192 } 00:13:48.192 ] 00:13:48.192 }' 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.192 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.452 
12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.452 12:39:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.452 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.452 "name": "raid_bdev1", 00:13:48.452 "uuid": "cdbc54dd-6032-4da8-843a-506042c1c69c", 00:13:48.452 "strip_size_kb": 0, 00:13:48.452 "state": "online", 00:13:48.452 "raid_level": "raid1", 00:13:48.452 "superblock": false, 00:13:48.452 "num_base_bdevs": 4, 00:13:48.452 "num_base_bdevs_discovered": 
3, 00:13:48.452 "num_base_bdevs_operational": 3, 00:13:48.452 "base_bdevs_list": [ 00:13:48.452 { 00:13:48.452 "name": "spare", 00:13:48.452 "uuid": "20681003-251f-5893-9e4a-a0aefcfde0d6", 00:13:48.452 "is_configured": true, 00:13:48.452 "data_offset": 0, 00:13:48.452 "data_size": 65536 00:13:48.452 }, 00:13:48.452 { 00:13:48.452 "name": null, 00:13:48.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.452 "is_configured": false, 00:13:48.452 "data_offset": 0, 00:13:48.452 "data_size": 65536 00:13:48.452 }, 00:13:48.452 { 00:13:48.452 "name": "BaseBdev3", 00:13:48.452 "uuid": "857ab98d-bd57-5fa8-8f41-e0f351e9fbb3", 00:13:48.452 "is_configured": true, 00:13:48.452 "data_offset": 0, 00:13:48.452 "data_size": 65536 00:13:48.452 }, 00:13:48.452 { 00:13:48.452 "name": "BaseBdev4", 00:13:48.452 "uuid": "8d75b46d-89a7-5af6-ade7-48a05658bbd6", 00:13:48.452 "is_configured": true, 00:13:48.452 "data_offset": 0, 00:13:48.452 "data_size": 65536 00:13:48.452 } 00:13:48.452 ] 00:13:48.452 }' 00:13:48.452 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.452 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.712 [2024-12-14 12:39:48.375032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:48.712 [2024-12-14 12:39:48.375080] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:48.712 [2024-12-14 12:39:48.375169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:48.712 [2024-12-14 12:39:48.375254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:13:48.712 [2024-12-14 12:39:48.375264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.712 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:48.971 /dev/nbd0 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:48.971 1+0 records in 00:13:48.971 1+0 records out 00:13:48.971 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000200556 s, 20.4 MB/s 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.971 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:48.972 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:48.972 12:39:48 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:48.972 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:48.972 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:48.972 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:48.972 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:49.231 /dev/nbd1 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:49.231 1+0 records in 00:13:49.231 1+0 records out 00:13:49.231 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273457 s, 15.0 MB/s 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:49.231 12:39:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:49.491 12:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:49.491 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:49.491 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:49.491 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:49.491 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:49.491 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.491 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:49.751 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 79295 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 79295 ']' 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 79295 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79295 00:13:50.010 killing process with pid 79295 00:13:50.010 Received shutdown signal, test time was about 60.000000 seconds 00:13:50.010 00:13:50.010 Latency(us) 00:13:50.010 [2024-12-14T12:39:49.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.010 [2024-12-14T12:39:49.748Z] =================================================================================================================== 00:13:50.010 [2024-12-14T12:39:49.748Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79295' 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 79295 00:13:50.010 [2024-12-14 12:39:49.589504] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:50.010 12:39:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 79295 00:13:50.584 [2024-12-14 12:39:50.094064] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:51.536 12:39:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:51.536 00:13:51.536 real 0m16.867s 00:13:51.537 user 0m19.018s 00:13:51.537 sys 0m2.839s 00:13:51.537 12:39:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:51.537 ************************************ 00:13:51.537 END TEST raid_rebuild_test 00:13:51.537 ************************************ 00:13:51.537 12:39:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.537 
12:39:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:51.537 12:39:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:51.537 12:39:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:51.537 12:39:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:51.537 ************************************ 00:13:51.537 START TEST raid_rebuild_test_sb 00:13:51.537 ************************************ 00:13:51.537 12:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:51.537 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:51.537 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:51.537 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:51.537 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:51.537 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.795 
12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=79736 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T 
raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 79736 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 79736 ']' 00:13:51.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:51.795 12:39:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:51.795 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:51.795 Zero copy mechanism will not be used. 00:13:51.795 [2024-12-14 12:39:51.370875] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:13:51.795 [2024-12-14 12:39:51.370989] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79736 ] 00:13:52.054 [2024-12-14 12:39:51.541177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.054 [2024-12-14 12:39:51.660801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.313 [2024-12-14 12:39:51.880309] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.313 [2024-12-14 12:39:51.880464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.572 BaseBdev1_malloc 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.572 [2024-12-14 12:39:52.256000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:52.572 [2024-12-14 12:39:52.256087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.572 [2024-12-14 12:39:52.256110] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:52.572 [2024-12-14 12:39:52.256121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.572 [2024-12-14 12:39:52.258212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.572 [2024-12-14 12:39:52.258250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:52.572 BaseBdev1 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.572 BaseBdev2_malloc 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.572 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.831 [2024-12-14 12:39:52.311730] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:52.831 [2024-12-14 12:39:52.311804] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.831 [2024-12-14 12:39:52.311825] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:52.831 [2024-12-14 12:39:52.311835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.831 [2024-12-14 12:39:52.313866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.831 [2024-12-14 12:39:52.313902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:52.831 BaseBdev2 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.831 BaseBdev3_malloc 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.831 [2024-12-14 12:39:52.381221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:52.831 [2024-12-14 12:39:52.381338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.831 [2024-12-14 12:39:52.381367] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:52.831 [2024-12-14 12:39:52.381380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:52.831 [2024-12-14 12:39:52.383562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.831 [2024-12-14 12:39:52.383601] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:52.831 BaseBdev3 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.831 BaseBdev4_malloc 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.831 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.832 [2024-12-14 12:39:52.436492] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:52.832 [2024-12-14 12:39:52.436551] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.832 [2024-12-14 12:39:52.436573] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:52.832 [2024-12-14 12:39:52.436583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.832 [2024-12-14 12:39:52.438568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.832 [2024-12-14 12:39:52.438608] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:52.832 BaseBdev4 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.832 spare_malloc 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.832 spare_delay 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.832 [2024-12-14 12:39:52.502789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:52.832 [2024-12-14 12:39:52.502893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.832 [2024-12-14 12:39:52.502929] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:52.832 [2024-12-14 12:39:52.502960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:13:52.832 [2024-12-14 12:39:52.505107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.832 [2024-12-14 12:39:52.505177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:52.832 spare 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.832 [2024-12-14 12:39:52.514830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:52.832 [2024-12-14 12:39:52.516685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:52.832 [2024-12-14 12:39:52.516746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:52.832 [2024-12-14 12:39:52.516804] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:52.832 [2024-12-14 12:39:52.516986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:52.832 [2024-12-14 12:39:52.517001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.832 [2024-12-14 12:39:52.517279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:52.832 [2024-12-14 12:39:52.517453] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:52.832 [2024-12-14 12:39:52.517464] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:52.832 [2024-12-14 12:39:52.517603] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:52.832 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.091 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.091 "name": "raid_bdev1", 00:13:53.091 "uuid": 
"870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:13:53.091 "strip_size_kb": 0, 00:13:53.091 "state": "online", 00:13:53.091 "raid_level": "raid1", 00:13:53.091 "superblock": true, 00:13:53.091 "num_base_bdevs": 4, 00:13:53.091 "num_base_bdevs_discovered": 4, 00:13:53.091 "num_base_bdevs_operational": 4, 00:13:53.091 "base_bdevs_list": [ 00:13:53.091 { 00:13:53.091 "name": "BaseBdev1", 00:13:53.091 "uuid": "31309f8b-ad27-544b-95b5-c864baf453e2", 00:13:53.091 "is_configured": true, 00:13:53.091 "data_offset": 2048, 00:13:53.091 "data_size": 63488 00:13:53.091 }, 00:13:53.091 { 00:13:53.091 "name": "BaseBdev2", 00:13:53.091 "uuid": "184189e8-e26a-55b0-85da-57b098070c71", 00:13:53.091 "is_configured": true, 00:13:53.091 "data_offset": 2048, 00:13:53.091 "data_size": 63488 00:13:53.091 }, 00:13:53.091 { 00:13:53.091 "name": "BaseBdev3", 00:13:53.091 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:13:53.091 "is_configured": true, 00:13:53.091 "data_offset": 2048, 00:13:53.091 "data_size": 63488 00:13:53.091 }, 00:13:53.091 { 00:13:53.091 "name": "BaseBdev4", 00:13:53.091 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:13:53.091 "is_configured": true, 00:13:53.091 "data_offset": 2048, 00:13:53.091 "data_size": 63488 00:13:53.091 } 00:13:53.091 ] 00:13:53.091 }' 00:13:53.091 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.091 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.350 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:53.350 12:39:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:53.350 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.350 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.350 [2024-12-14 12:39:52.982401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:53.350 12:39:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.350 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:53.350 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:53.351 12:39:53 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.351 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:53.610 [2024-12-14 12:39:53.237654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:53.610 /dev/nbd0 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:53.610 1+0 records in 00:13:53.610 1+0 records out 00:13:53.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000466329 s, 8.8 MB/s 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:53.610 12:39:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:00.178 63488+0 records in 00:14:00.178 63488+0 records out 00:14:00.178 32505856 bytes (33 MB, 31 MiB) copied, 5.43032 s, 6.0 MB/s 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:00.178 [2024-12-14 12:39:58.952746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.178 [2024-12-14 12:39:58.968838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.178 12:39:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.178 12:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.178 "name": "raid_bdev1", 00:14:00.178 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:00.178 "strip_size_kb": 0, 00:14:00.178 "state": "online", 00:14:00.178 "raid_level": "raid1", 00:14:00.178 "superblock": true, 00:14:00.178 "num_base_bdevs": 4, 00:14:00.178 "num_base_bdevs_discovered": 3, 00:14:00.178 "num_base_bdevs_operational": 3, 00:14:00.178 "base_bdevs_list": [ 00:14:00.178 { 00:14:00.178 "name": null, 00:14:00.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.178 "is_configured": false, 00:14:00.178 "data_offset": 0, 00:14:00.178 "data_size": 63488 00:14:00.178 }, 00:14:00.178 { 00:14:00.178 "name": "BaseBdev2", 00:14:00.178 "uuid": "184189e8-e26a-55b0-85da-57b098070c71", 00:14:00.178 "is_configured": true, 00:14:00.178 
"data_offset": 2048, 00:14:00.178 "data_size": 63488 00:14:00.178 }, 00:14:00.178 { 00:14:00.178 "name": "BaseBdev3", 00:14:00.178 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:00.178 "is_configured": true, 00:14:00.178 "data_offset": 2048, 00:14:00.178 "data_size": 63488 00:14:00.178 }, 00:14:00.178 { 00:14:00.178 "name": "BaseBdev4", 00:14:00.178 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:00.178 "is_configured": true, 00:14:00.178 "data_offset": 2048, 00:14:00.178 "data_size": 63488 00:14:00.178 } 00:14:00.178 ] 00:14:00.178 }' 00:14:00.178 12:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.178 12:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.178 12:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.178 12:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.178 12:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.178 [2024-12-14 12:39:59.420101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.178 [2024-12-14 12:39:59.435177] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:00.179 12:39:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.179 12:39:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:00.179 [2024-12-14 12:39:59.437157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:00.746 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.005 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.005 "name": "raid_bdev1", 00:14:01.005 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:01.005 "strip_size_kb": 0, 00:14:01.005 "state": "online", 00:14:01.005 "raid_level": "raid1", 00:14:01.005 "superblock": true, 00:14:01.005 "num_base_bdevs": 4, 00:14:01.005 "num_base_bdevs_discovered": 4, 00:14:01.005 "num_base_bdevs_operational": 4, 00:14:01.005 "process": { 00:14:01.005 "type": "rebuild", 00:14:01.005 "target": "spare", 00:14:01.005 "progress": { 00:14:01.005 "blocks": 20480, 00:14:01.005 "percent": 32 00:14:01.005 } 00:14:01.005 }, 00:14:01.005 "base_bdevs_list": [ 00:14:01.005 { 00:14:01.005 "name": "spare", 00:14:01.005 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:01.005 "is_configured": true, 00:14:01.005 "data_offset": 2048, 00:14:01.005 "data_size": 63488 00:14:01.005 }, 00:14:01.005 { 00:14:01.005 "name": "BaseBdev2", 00:14:01.005 "uuid": "184189e8-e26a-55b0-85da-57b098070c71", 00:14:01.005 "is_configured": true, 00:14:01.005 "data_offset": 2048, 00:14:01.005 "data_size": 63488 00:14:01.005 }, 00:14:01.005 { 00:14:01.005 "name": "BaseBdev3", 00:14:01.005 "uuid": 
"3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:01.005 "is_configured": true, 00:14:01.005 "data_offset": 2048, 00:14:01.005 "data_size": 63488 00:14:01.005 }, 00:14:01.005 { 00:14:01.005 "name": "BaseBdev4", 00:14:01.005 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:01.005 "is_configured": true, 00:14:01.005 "data_offset": 2048, 00:14:01.005 "data_size": 63488 00:14:01.005 } 00:14:01.005 ] 00:14:01.005 }' 00:14:01.005 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.005 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.005 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.005 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.005 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:01.005 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.005 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.005 [2024-12-14 12:40:00.596407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.006 [2024-12-14 12:40:00.642683] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:01.006 [2024-12-14 12:40:00.642763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.006 [2024-12-14 12:40:00.642781] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.006 [2024-12-14 12:40:00.642791] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.006 "name": "raid_bdev1", 00:14:01.006 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:01.006 "strip_size_kb": 0, 00:14:01.006 "state": "online", 00:14:01.006 "raid_level": "raid1", 00:14:01.006 "superblock": true, 00:14:01.006 "num_base_bdevs": 4, 00:14:01.006 
"num_base_bdevs_discovered": 3, 00:14:01.006 "num_base_bdevs_operational": 3, 00:14:01.006 "base_bdevs_list": [ 00:14:01.006 { 00:14:01.006 "name": null, 00:14:01.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.006 "is_configured": false, 00:14:01.006 "data_offset": 0, 00:14:01.006 "data_size": 63488 00:14:01.006 }, 00:14:01.006 { 00:14:01.006 "name": "BaseBdev2", 00:14:01.006 "uuid": "184189e8-e26a-55b0-85da-57b098070c71", 00:14:01.006 "is_configured": true, 00:14:01.006 "data_offset": 2048, 00:14:01.006 "data_size": 63488 00:14:01.006 }, 00:14:01.006 { 00:14:01.006 "name": "BaseBdev3", 00:14:01.006 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:01.006 "is_configured": true, 00:14:01.006 "data_offset": 2048, 00:14:01.006 "data_size": 63488 00:14:01.006 }, 00:14:01.006 { 00:14:01.006 "name": "BaseBdev4", 00:14:01.006 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:01.006 "is_configured": true, 00:14:01.006 "data_offset": 2048, 00:14:01.006 "data_size": 63488 00:14:01.006 } 00:14:01.006 ] 00:14:01.006 }' 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.006 12:40:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.575 "name": "raid_bdev1", 00:14:01.575 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:01.575 "strip_size_kb": 0, 00:14:01.575 "state": "online", 00:14:01.575 "raid_level": "raid1", 00:14:01.575 "superblock": true, 00:14:01.575 "num_base_bdevs": 4, 00:14:01.575 "num_base_bdevs_discovered": 3, 00:14:01.575 "num_base_bdevs_operational": 3, 00:14:01.575 "base_bdevs_list": [ 00:14:01.575 { 00:14:01.575 "name": null, 00:14:01.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.575 "is_configured": false, 00:14:01.575 "data_offset": 0, 00:14:01.575 "data_size": 63488 00:14:01.575 }, 00:14:01.575 { 00:14:01.575 "name": "BaseBdev2", 00:14:01.575 "uuid": "184189e8-e26a-55b0-85da-57b098070c71", 00:14:01.575 "is_configured": true, 00:14:01.575 "data_offset": 2048, 00:14:01.575 "data_size": 63488 00:14:01.575 }, 00:14:01.575 { 00:14:01.575 "name": "BaseBdev3", 00:14:01.575 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:01.575 "is_configured": true, 00:14:01.575 "data_offset": 2048, 00:14:01.575 "data_size": 63488 00:14:01.575 }, 00:14:01.575 { 00:14:01.575 "name": "BaseBdev4", 00:14:01.575 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:01.575 "is_configured": true, 00:14:01.575 "data_offset": 2048, 00:14:01.575 "data_size": 63488 00:14:01.575 } 00:14:01.575 ] 00:14:01.575 }' 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.575 12:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:01.575 [2024-12-14 12:40:01.271734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.575 [2024-12-14 12:40:01.286282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:01.576 12:40:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.576 12:40:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:01.576 [2024-12-14 12:40:01.288344] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.986 "name": "raid_bdev1", 00:14:02.986 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:02.986 "strip_size_kb": 0, 00:14:02.986 "state": "online", 00:14:02.986 "raid_level": "raid1", 00:14:02.986 "superblock": true, 00:14:02.986 "num_base_bdevs": 4, 00:14:02.986 "num_base_bdevs_discovered": 4, 00:14:02.986 "num_base_bdevs_operational": 4, 00:14:02.986 "process": { 00:14:02.986 "type": "rebuild", 00:14:02.986 "target": "spare", 00:14:02.986 "progress": { 00:14:02.986 "blocks": 20480, 00:14:02.986 "percent": 32 00:14:02.986 } 00:14:02.986 }, 00:14:02.986 "base_bdevs_list": [ 00:14:02.986 { 00:14:02.986 "name": "spare", 00:14:02.986 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:02.986 "is_configured": true, 00:14:02.986 "data_offset": 2048, 00:14:02.986 "data_size": 63488 00:14:02.986 }, 00:14:02.986 { 00:14:02.986 "name": "BaseBdev2", 00:14:02.986 "uuid": "184189e8-e26a-55b0-85da-57b098070c71", 00:14:02.986 "is_configured": true, 00:14:02.986 "data_offset": 2048, 00:14:02.986 "data_size": 63488 00:14:02.986 }, 00:14:02.986 { 00:14:02.986 "name": "BaseBdev3", 00:14:02.986 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:02.986 "is_configured": true, 00:14:02.986 "data_offset": 2048, 00:14:02.986 "data_size": 63488 00:14:02.986 }, 00:14:02.986 { 00:14:02.986 "name": "BaseBdev4", 00:14:02.986 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:02.986 "is_configured": true, 00:14:02.986 "data_offset": 2048, 00:14:02.986 "data_size": 63488 00:14:02.986 } 00:14:02.986 ] 00:14:02.986 }' 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:02.986 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:02.986 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.987 [2024-12-14 12:40:02.427709] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:02.987 [2024-12-14 12:40:02.593804] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.987 "name": "raid_bdev1", 00:14:02.987 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:02.987 "strip_size_kb": 0, 00:14:02.987 "state": "online", 00:14:02.987 "raid_level": "raid1", 00:14:02.987 "superblock": true, 00:14:02.987 "num_base_bdevs": 4, 00:14:02.987 "num_base_bdevs_discovered": 3, 00:14:02.987 "num_base_bdevs_operational": 3, 00:14:02.987 "process": { 00:14:02.987 "type": "rebuild", 00:14:02.987 "target": "spare", 00:14:02.987 "progress": { 00:14:02.987 "blocks": 24576, 00:14:02.987 "percent": 38 00:14:02.987 } 00:14:02.987 }, 00:14:02.987 "base_bdevs_list": [ 00:14:02.987 { 00:14:02.987 "name": "spare", 00:14:02.987 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:02.987 "is_configured": true, 00:14:02.987 "data_offset": 2048, 00:14:02.987 "data_size": 63488 00:14:02.987 }, 00:14:02.987 { 00:14:02.987 "name": null, 00:14:02.987 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:02.987 "is_configured": false, 00:14:02.987 "data_offset": 0, 00:14:02.987 "data_size": 63488 00:14:02.987 }, 00:14:02.987 { 00:14:02.987 "name": "BaseBdev3", 00:14:02.987 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:02.987 "is_configured": true, 00:14:02.987 "data_offset": 2048, 00:14:02.987 "data_size": 63488 00:14:02.987 }, 00:14:02.987 { 00:14:02.987 "name": "BaseBdev4", 00:14:02.987 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:02.987 "is_configured": true, 00:14:02.987 "data_offset": 2048, 00:14:02.987 "data_size": 63488 00:14:02.987 } 00:14:02.987 ] 00:14:02.987 }' 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.987 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=457 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.247 "name": "raid_bdev1", 00:14:03.247 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:03.247 "strip_size_kb": 0, 00:14:03.247 "state": "online", 00:14:03.247 "raid_level": "raid1", 00:14:03.247 "superblock": true, 00:14:03.247 "num_base_bdevs": 4, 00:14:03.247 "num_base_bdevs_discovered": 3, 00:14:03.247 "num_base_bdevs_operational": 3, 00:14:03.247 "process": { 00:14:03.247 "type": "rebuild", 00:14:03.247 "target": "spare", 00:14:03.247 "progress": { 00:14:03.247 "blocks": 26624, 00:14:03.247 "percent": 41 00:14:03.247 } 00:14:03.247 }, 00:14:03.247 "base_bdevs_list": [ 00:14:03.247 { 00:14:03.247 "name": "spare", 00:14:03.247 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:03.247 "is_configured": true, 00:14:03.247 "data_offset": 2048, 00:14:03.247 "data_size": 63488 00:14:03.247 }, 00:14:03.247 { 00:14:03.247 "name": null, 00:14:03.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.247 "is_configured": false, 00:14:03.247 "data_offset": 0, 00:14:03.247 "data_size": 63488 00:14:03.247 }, 00:14:03.247 { 00:14:03.247 "name": "BaseBdev3", 00:14:03.247 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:03.247 "is_configured": true, 00:14:03.247 "data_offset": 2048, 00:14:03.247 "data_size": 63488 00:14:03.247 }, 00:14:03.247 { 00:14:03.247 "name": "BaseBdev4", 00:14:03.247 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:03.247 "is_configured": true, 00:14:03.247 "data_offset": 2048, 00:14:03.247 "data_size": 63488 
00:14:03.247 } 00:14:03.247 ] 00:14:03.247 }' 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.247 12:40:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.186 12:40:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.444 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.444 "name": "raid_bdev1", 00:14:04.444 "uuid": 
"870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:04.444 "strip_size_kb": 0, 00:14:04.444 "state": "online", 00:14:04.444 "raid_level": "raid1", 00:14:04.444 "superblock": true, 00:14:04.445 "num_base_bdevs": 4, 00:14:04.445 "num_base_bdevs_discovered": 3, 00:14:04.445 "num_base_bdevs_operational": 3, 00:14:04.445 "process": { 00:14:04.445 "type": "rebuild", 00:14:04.445 "target": "spare", 00:14:04.445 "progress": { 00:14:04.445 "blocks": 51200, 00:14:04.445 "percent": 80 00:14:04.445 } 00:14:04.445 }, 00:14:04.445 "base_bdevs_list": [ 00:14:04.445 { 00:14:04.445 "name": "spare", 00:14:04.445 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:04.445 "is_configured": true, 00:14:04.445 "data_offset": 2048, 00:14:04.445 "data_size": 63488 00:14:04.445 }, 00:14:04.445 { 00:14:04.445 "name": null, 00:14:04.445 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.445 "is_configured": false, 00:14:04.445 "data_offset": 0, 00:14:04.445 "data_size": 63488 00:14:04.445 }, 00:14:04.445 { 00:14:04.445 "name": "BaseBdev3", 00:14:04.445 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:04.445 "is_configured": true, 00:14:04.445 "data_offset": 2048, 00:14:04.445 "data_size": 63488 00:14:04.445 }, 00:14:04.445 { 00:14:04.445 "name": "BaseBdev4", 00:14:04.445 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:04.445 "is_configured": true, 00:14:04.445 "data_offset": 2048, 00:14:04.445 "data_size": 63488 00:14:04.445 } 00:14:04.445 ] 00:14:04.445 }' 00:14:04.445 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.445 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.445 12:40:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.445 12:40:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.445 12:40:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.012 [2024-12-14 12:40:04.502832] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:05.012 [2024-12-14 12:40:04.502988] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:05.012 [2024-12-14 12:40:04.503147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.581 "name": "raid_bdev1", 00:14:05.581 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:05.581 "strip_size_kb": 0, 00:14:05.581 "state": "online", 00:14:05.581 "raid_level": "raid1", 00:14:05.581 "superblock": true, 00:14:05.581 "num_base_bdevs": 
4, 00:14:05.581 "num_base_bdevs_discovered": 3, 00:14:05.581 "num_base_bdevs_operational": 3, 00:14:05.581 "base_bdevs_list": [ 00:14:05.581 { 00:14:05.581 "name": "spare", 00:14:05.581 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:05.581 "is_configured": true, 00:14:05.581 "data_offset": 2048, 00:14:05.581 "data_size": 63488 00:14:05.581 }, 00:14:05.581 { 00:14:05.581 "name": null, 00:14:05.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.581 "is_configured": false, 00:14:05.581 "data_offset": 0, 00:14:05.581 "data_size": 63488 00:14:05.581 }, 00:14:05.581 { 00:14:05.581 "name": "BaseBdev3", 00:14:05.581 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:05.581 "is_configured": true, 00:14:05.581 "data_offset": 2048, 00:14:05.581 "data_size": 63488 00:14:05.581 }, 00:14:05.581 { 00:14:05.581 "name": "BaseBdev4", 00:14:05.581 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:05.581 "is_configured": true, 00:14:05.581 "data_offset": 2048, 00:14:05.581 "data_size": 63488 00:14:05.581 } 00:14:05.581 ] 00:14:05.581 }' 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:05.581 12:40:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.581 "name": "raid_bdev1", 00:14:05.581 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:05.581 "strip_size_kb": 0, 00:14:05.581 "state": "online", 00:14:05.581 "raid_level": "raid1", 00:14:05.581 "superblock": true, 00:14:05.581 "num_base_bdevs": 4, 00:14:05.581 "num_base_bdevs_discovered": 3, 00:14:05.581 "num_base_bdevs_operational": 3, 00:14:05.581 "base_bdevs_list": [ 00:14:05.581 { 00:14:05.581 "name": "spare", 00:14:05.581 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:05.581 "is_configured": true, 00:14:05.581 "data_offset": 2048, 00:14:05.581 "data_size": 63488 00:14:05.581 }, 00:14:05.581 { 00:14:05.581 "name": null, 00:14:05.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.581 "is_configured": false, 00:14:05.581 "data_offset": 0, 00:14:05.581 "data_size": 63488 00:14:05.581 }, 00:14:05.581 { 00:14:05.581 "name": "BaseBdev3", 00:14:05.581 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:05.581 "is_configured": true, 00:14:05.581 "data_offset": 2048, 00:14:05.581 "data_size": 63488 00:14:05.581 }, 00:14:05.581 { 00:14:05.581 "name": "BaseBdev4", 00:14:05.581 "uuid": 
"c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:05.581 "is_configured": true, 00:14:05.581 "data_offset": 2048, 00:14:05.581 "data_size": 63488 00:14:05.581 } 00:14:05.581 ] 00:14:05.581 }' 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:05.581 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.841 12:40:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.841 "name": "raid_bdev1", 00:14:05.841 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:05.841 "strip_size_kb": 0, 00:14:05.841 "state": "online", 00:14:05.841 "raid_level": "raid1", 00:14:05.841 "superblock": true, 00:14:05.841 "num_base_bdevs": 4, 00:14:05.841 "num_base_bdevs_discovered": 3, 00:14:05.841 "num_base_bdevs_operational": 3, 00:14:05.841 "base_bdevs_list": [ 00:14:05.841 { 00:14:05.841 "name": "spare", 00:14:05.841 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:05.841 "is_configured": true, 00:14:05.841 "data_offset": 2048, 00:14:05.841 "data_size": 63488 00:14:05.841 }, 00:14:05.841 { 00:14:05.841 "name": null, 00:14:05.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.841 "is_configured": false, 00:14:05.841 "data_offset": 0, 00:14:05.841 "data_size": 63488 00:14:05.841 }, 00:14:05.841 { 00:14:05.841 "name": "BaseBdev3", 00:14:05.841 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:05.841 "is_configured": true, 00:14:05.841 "data_offset": 2048, 00:14:05.841 "data_size": 63488 00:14:05.841 }, 00:14:05.841 { 00:14:05.841 "name": "BaseBdev4", 00:14:05.841 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:05.841 "is_configured": true, 00:14:05.841 "data_offset": 2048, 00:14:05.841 "data_size": 63488 00:14:05.841 } 00:14:05.841 ] 00:14:05.841 }' 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.841 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 
-- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.101 [2024-12-14 12:40:05.715828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.101 [2024-12-14 12:40:05.715904] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.101 [2024-12-14 12:40:05.716019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.101 [2024-12-14 12:40:05.716143] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.101 [2024-12-14 12:40:05.716193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 
00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:06.101 12:40:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:06.361 /dev/nbd0 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.361 
12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.361 1+0 records in 00:14:06.361 1+0 records out 00:14:06.361 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571481 s, 7.2 MB/s 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:06.361 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:06.622 /dev/nbd1 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.622 1+0 records in 00:14:06.622 1+0 records out 00:14:06.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270649 s, 15.1 MB/s 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:06.622 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:06.882 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:06.882 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.882 12:40:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:06.882 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:06.882 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:06.882 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:06.882 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:07.142 12:40:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.142 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.402 [2024-12-14 12:40:06.885329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:07.402 [2024-12-14 12:40:06.885432] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:07.402 [2024-12-14 12:40:06.885474] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:07.402 [2024-12-14 12:40:06.885503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:07.402 [2024-12-14 12:40:06.887848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:07.402 [2024-12-14 12:40:06.887921] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:07.402 [2024-12-14 12:40:06.888050] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:07.402 [2024-12-14 12:40:06.888126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:07.402 [2024-12-14 12:40:06.888323] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:07.402 [2024-12-14 12:40:06.888459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:07.402 spare 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.402 [2024-12-14 12:40:06.988409] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:07.402 [2024-12-14 12:40:06.988543] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:07.402 [2024-12-14 12:40:06.988917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:07.402 [2024-12-14 12:40:06.989165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:07.402 [2024-12-14 12:40:06.989180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:07.402 [2024-12-14 12:40:06.989394] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.402 12:40:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.402 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.402 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.402 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.402 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.403 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.403 "name": "raid_bdev1", 00:14:07.403 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:07.403 "strip_size_kb": 0, 00:14:07.403 "state": "online", 00:14:07.403 "raid_level": "raid1", 00:14:07.403 "superblock": true, 00:14:07.403 "num_base_bdevs": 4, 00:14:07.403 "num_base_bdevs_discovered": 3, 00:14:07.403 "num_base_bdevs_operational": 
3, 00:14:07.403 "base_bdevs_list": [ 00:14:07.403 { 00:14:07.403 "name": "spare", 00:14:07.403 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:07.403 "is_configured": true, 00:14:07.403 "data_offset": 2048, 00:14:07.403 "data_size": 63488 00:14:07.403 }, 00:14:07.403 { 00:14:07.403 "name": null, 00:14:07.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.403 "is_configured": false, 00:14:07.403 "data_offset": 2048, 00:14:07.403 "data_size": 63488 00:14:07.403 }, 00:14:07.403 { 00:14:07.403 "name": "BaseBdev3", 00:14:07.403 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:07.403 "is_configured": true, 00:14:07.403 "data_offset": 2048, 00:14:07.403 "data_size": 63488 00:14:07.403 }, 00:14:07.403 { 00:14:07.403 "name": "BaseBdev4", 00:14:07.403 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:07.403 "is_configured": true, 00:14:07.403 "data_offset": 2048, 00:14:07.403 "data_size": 63488 00:14:07.403 } 00:14:07.403 ] 00:14:07.403 }' 00:14:07.403 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.403 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.973 12:40:07 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.973 "name": "raid_bdev1", 00:14:07.973 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:07.973 "strip_size_kb": 0, 00:14:07.973 "state": "online", 00:14:07.973 "raid_level": "raid1", 00:14:07.973 "superblock": true, 00:14:07.973 "num_base_bdevs": 4, 00:14:07.973 "num_base_bdevs_discovered": 3, 00:14:07.973 "num_base_bdevs_operational": 3, 00:14:07.973 "base_bdevs_list": [ 00:14:07.973 { 00:14:07.973 "name": "spare", 00:14:07.973 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:07.973 "is_configured": true, 00:14:07.973 "data_offset": 2048, 00:14:07.973 "data_size": 63488 00:14:07.973 }, 00:14:07.973 { 00:14:07.973 "name": null, 00:14:07.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.973 "is_configured": false, 00:14:07.973 "data_offset": 2048, 00:14:07.973 "data_size": 63488 00:14:07.973 }, 00:14:07.973 { 00:14:07.973 "name": "BaseBdev3", 00:14:07.973 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:07.973 "is_configured": true, 00:14:07.973 "data_offset": 2048, 00:14:07.973 "data_size": 63488 00:14:07.973 }, 00:14:07.973 { 00:14:07.973 "name": "BaseBdev4", 00:14:07.973 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:07.973 "is_configured": true, 00:14:07.973 "data_offset": 2048, 00:14:07.973 "data_size": 63488 00:14:07.973 } 00:14:07.973 ] 00:14:07.973 }' 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.973 [2024-12-14 12:40:07.648266] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.973 "name": "raid_bdev1", 00:14:07.973 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:07.973 "strip_size_kb": 0, 00:14:07.973 "state": "online", 00:14:07.973 "raid_level": "raid1", 00:14:07.973 "superblock": true, 00:14:07.973 "num_base_bdevs": 4, 00:14:07.973 "num_base_bdevs_discovered": 2, 00:14:07.973 "num_base_bdevs_operational": 2, 00:14:07.973 "base_bdevs_list": [ 00:14:07.973 { 00:14:07.973 "name": null, 00:14:07.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.973 "is_configured": false, 00:14:07.973 "data_offset": 0, 00:14:07.973 "data_size": 63488 00:14:07.973 }, 00:14:07.973 { 00:14:07.973 "name": null, 00:14:07.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.973 "is_configured": false, 00:14:07.973 "data_offset": 2048, 00:14:07.973 "data_size": 63488 00:14:07.973 }, 00:14:07.973 { 00:14:07.973 "name": "BaseBdev3", 00:14:07.973 
"uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:07.973 "is_configured": true, 00:14:07.973 "data_offset": 2048, 00:14:07.973 "data_size": 63488 00:14:07.973 }, 00:14:07.973 { 00:14:07.973 "name": "BaseBdev4", 00:14:07.973 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:07.973 "is_configured": true, 00:14:07.973 "data_offset": 2048, 00:14:07.973 "data_size": 63488 00:14:07.973 } 00:14:07.973 ] 00:14:07.973 }' 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.973 12:40:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.543 12:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:08.543 12:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.543 12:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.543 [2024-12-14 12:40:08.139427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.543 [2024-12-14 12:40:08.139713] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:08.543 [2024-12-14 12:40:08.139787] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:08.543 [2024-12-14 12:40:08.139858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.543 [2024-12-14 12:40:08.154309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:08.543 12:40:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.543 12:40:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:08.543 [2024-12-14 12:40:08.156400] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.482 "name": "raid_bdev1", 00:14:09.482 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:09.482 "strip_size_kb": 0, 00:14:09.482 "state": "online", 00:14:09.482 "raid_level": "raid1", 
00:14:09.482 "superblock": true, 00:14:09.482 "num_base_bdevs": 4, 00:14:09.482 "num_base_bdevs_discovered": 3, 00:14:09.482 "num_base_bdevs_operational": 3, 00:14:09.482 "process": { 00:14:09.482 "type": "rebuild", 00:14:09.482 "target": "spare", 00:14:09.482 "progress": { 00:14:09.482 "blocks": 20480, 00:14:09.482 "percent": 32 00:14:09.482 } 00:14:09.482 }, 00:14:09.482 "base_bdevs_list": [ 00:14:09.482 { 00:14:09.482 "name": "spare", 00:14:09.482 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:09.482 "is_configured": true, 00:14:09.482 "data_offset": 2048, 00:14:09.482 "data_size": 63488 00:14:09.482 }, 00:14:09.482 { 00:14:09.482 "name": null, 00:14:09.482 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.482 "is_configured": false, 00:14:09.482 "data_offset": 2048, 00:14:09.482 "data_size": 63488 00:14:09.482 }, 00:14:09.482 { 00:14:09.482 "name": "BaseBdev3", 00:14:09.482 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:09.482 "is_configured": true, 00:14:09.482 "data_offset": 2048, 00:14:09.482 "data_size": 63488 00:14:09.482 }, 00:14:09.482 { 00:14:09.482 "name": "BaseBdev4", 00:14:09.482 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:09.482 "is_configured": true, 00:14:09.482 "data_offset": 2048, 00:14:09.482 "data_size": 63488 00:14:09.482 } 00:14:09.482 ] 00:14:09.482 }' 00:14:09.482 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.742 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:09.742 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.742 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.743 [2024-12-14 12:40:09.315987] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.743 [2024-12-14 12:40:09.362190] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:09.743 [2024-12-14 12:40:09.362241] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.743 [2024-12-14 12:40:09.362259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.743 [2024-12-14 12:40:09.362266] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.743 "name": "raid_bdev1", 00:14:09.743 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:09.743 "strip_size_kb": 0, 00:14:09.743 "state": "online", 00:14:09.743 "raid_level": "raid1", 00:14:09.743 "superblock": true, 00:14:09.743 "num_base_bdevs": 4, 00:14:09.743 "num_base_bdevs_discovered": 2, 00:14:09.743 "num_base_bdevs_operational": 2, 00:14:09.743 "base_bdevs_list": [ 00:14:09.743 { 00:14:09.743 "name": null, 00:14:09.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.743 "is_configured": false, 00:14:09.743 "data_offset": 0, 00:14:09.743 "data_size": 63488 00:14:09.743 }, 00:14:09.743 { 00:14:09.743 "name": null, 00:14:09.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.743 "is_configured": false, 00:14:09.743 "data_offset": 2048, 00:14:09.743 "data_size": 63488 00:14:09.743 }, 00:14:09.743 { 00:14:09.743 "name": "BaseBdev3", 00:14:09.743 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:09.743 "is_configured": true, 00:14:09.743 "data_offset": 2048, 00:14:09.743 "data_size": 63488 00:14:09.743 }, 00:14:09.743 { 00:14:09.743 "name": "BaseBdev4", 00:14:09.743 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:09.743 "is_configured": true, 00:14:09.743 "data_offset": 2048, 00:14:09.743 "data_size": 63488 00:14:09.743 } 00:14:09.743 ] 00:14:09.743 }' 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:09.743 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.312 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:10.312 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.312 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.312 [2024-12-14 12:40:09.830987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:10.312 [2024-12-14 12:40:09.831134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:10.312 [2024-12-14 12:40:09.831188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:10.312 [2024-12-14 12:40:09.831221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:10.312 [2024-12-14 12:40:09.831791] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:10.312 [2024-12-14 12:40:09.831863] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:10.312 [2024-12-14 12:40:09.832013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:10.312 [2024-12-14 12:40:09.832073] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:10.312 [2024-12-14 12:40:09.832130] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:10.312 [2024-12-14 12:40:09.832184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:10.312 [2024-12-14 12:40:09.846728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:10.312 spare 00:14:10.312 12:40:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.312 [2024-12-14 12:40:09.848637] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.312 12:40:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:11.250 "name": "raid_bdev1", 00:14:11.250 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:11.250 "strip_size_kb": 0, 00:14:11.250 "state": "online", 00:14:11.250 
"raid_level": "raid1", 00:14:11.250 "superblock": true, 00:14:11.250 "num_base_bdevs": 4, 00:14:11.250 "num_base_bdevs_discovered": 3, 00:14:11.250 "num_base_bdevs_operational": 3, 00:14:11.250 "process": { 00:14:11.250 "type": "rebuild", 00:14:11.250 "target": "spare", 00:14:11.250 "progress": { 00:14:11.250 "blocks": 20480, 00:14:11.250 "percent": 32 00:14:11.250 } 00:14:11.250 }, 00:14:11.250 "base_bdevs_list": [ 00:14:11.250 { 00:14:11.250 "name": "spare", 00:14:11.250 "uuid": "eae2e5ba-111e-5f5f-8358-8114193aba87", 00:14:11.250 "is_configured": true, 00:14:11.250 "data_offset": 2048, 00:14:11.250 "data_size": 63488 00:14:11.250 }, 00:14:11.250 { 00:14:11.250 "name": null, 00:14:11.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.250 "is_configured": false, 00:14:11.250 "data_offset": 2048, 00:14:11.250 "data_size": 63488 00:14:11.250 }, 00:14:11.250 { 00:14:11.250 "name": "BaseBdev3", 00:14:11.250 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:11.250 "is_configured": true, 00:14:11.250 "data_offset": 2048, 00:14:11.250 "data_size": 63488 00:14:11.250 }, 00:14:11.250 { 00:14:11.250 "name": "BaseBdev4", 00:14:11.250 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:11.250 "is_configured": true, 00:14:11.250 "data_offset": 2048, 00:14:11.250 "data_size": 63488 00:14:11.250 } 00:14:11.250 ] 00:14:11.250 }' 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.250 12:40:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.250 [2024-12-14 12:40:10.984093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.510 [2024-12-14 12:40:11.053999] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:11.510 [2024-12-14 12:40:11.054105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.510 [2024-12-14 12:40:11.054138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.510 [2024-12-14 12:40:11.054148] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.510 
12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.510 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.510 "name": "raid_bdev1", 00:14:11.510 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:11.510 "strip_size_kb": 0, 00:14:11.510 "state": "online", 00:14:11.510 "raid_level": "raid1", 00:14:11.510 "superblock": true, 00:14:11.511 "num_base_bdevs": 4, 00:14:11.511 "num_base_bdevs_discovered": 2, 00:14:11.511 "num_base_bdevs_operational": 2, 00:14:11.511 "base_bdevs_list": [ 00:14:11.511 { 00:14:11.511 "name": null, 00:14:11.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.511 "is_configured": false, 00:14:11.511 "data_offset": 0, 00:14:11.511 "data_size": 63488 00:14:11.511 }, 00:14:11.511 { 00:14:11.511 "name": null, 00:14:11.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.511 "is_configured": false, 00:14:11.511 "data_offset": 2048, 00:14:11.511 "data_size": 63488 00:14:11.511 }, 00:14:11.511 { 00:14:11.511 "name": "BaseBdev3", 00:14:11.511 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:11.511 "is_configured": true, 00:14:11.511 "data_offset": 2048, 00:14:11.511 "data_size": 63488 00:14:11.511 }, 00:14:11.511 { 00:14:11.511 "name": "BaseBdev4", 00:14:11.511 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:11.511 "is_configured": true, 00:14:11.511 "data_offset": 2048, 00:14:11.511 "data_size": 63488 00:14:11.511 } 00:14:11.511 ] 00:14:11.511 }' 00:14:11.511 12:40:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.511 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.770 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.030 "name": "raid_bdev1", 00:14:12.030 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:12.030 "strip_size_kb": 0, 00:14:12.030 "state": "online", 00:14:12.030 "raid_level": "raid1", 00:14:12.030 "superblock": true, 00:14:12.030 "num_base_bdevs": 4, 00:14:12.030 "num_base_bdevs_discovered": 2, 00:14:12.030 "num_base_bdevs_operational": 2, 00:14:12.030 "base_bdevs_list": [ 00:14:12.030 { 00:14:12.030 "name": null, 00:14:12.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.030 "is_configured": false, 00:14:12.030 "data_offset": 0, 00:14:12.030 "data_size": 63488 00:14:12.030 }, 00:14:12.030 
{ 00:14:12.030 "name": null, 00:14:12.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.030 "is_configured": false, 00:14:12.030 "data_offset": 2048, 00:14:12.030 "data_size": 63488 00:14:12.030 }, 00:14:12.030 { 00:14:12.030 "name": "BaseBdev3", 00:14:12.030 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:12.030 "is_configured": true, 00:14:12.030 "data_offset": 2048, 00:14:12.030 "data_size": 63488 00:14:12.030 }, 00:14:12.030 { 00:14:12.030 "name": "BaseBdev4", 00:14:12.030 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:12.030 "is_configured": true, 00:14:12.030 "data_offset": 2048, 00:14:12.030 "data_size": 63488 00:14:12.030 } 00:14:12.030 ] 00:14:12.030 }' 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.030 [2024-12-14 12:40:11.638366] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:12.030 [2024-12-14 12:40:11.638494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:12.030 [2024-12-14 12:40:11.638517] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:12.030 [2024-12-14 12:40:11.638528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:12.030 [2024-12-14 12:40:11.638964] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:12.030 [2024-12-14 12:40:11.638984] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:12.030 [2024-12-14 12:40:11.639069] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:12.030 [2024-12-14 12:40:11.639085] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:12.030 [2024-12-14 12:40:11.639093] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:12.030 [2024-12-14 12:40:11.639117] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:12.030 BaseBdev1 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.030 12:40:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.969 12:40:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.969 "name": "raid_bdev1", 00:14:12.969 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:12.969 "strip_size_kb": 0, 00:14:12.969 "state": "online", 00:14:12.969 "raid_level": "raid1", 00:14:12.969 "superblock": true, 00:14:12.969 "num_base_bdevs": 4, 00:14:12.969 "num_base_bdevs_discovered": 2, 00:14:12.969 "num_base_bdevs_operational": 2, 00:14:12.969 "base_bdevs_list": [ 00:14:12.969 { 00:14:12.969 "name": null, 00:14:12.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.969 "is_configured": false, 00:14:12.969 "data_offset": 0, 00:14:12.969 "data_size": 63488 00:14:12.969 }, 00:14:12.969 { 00:14:12.969 "name": null, 00:14:12.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.969 
"is_configured": false, 00:14:12.969 "data_offset": 2048, 00:14:12.969 "data_size": 63488 00:14:12.969 }, 00:14:12.969 { 00:14:12.969 "name": "BaseBdev3", 00:14:12.969 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:12.969 "is_configured": true, 00:14:12.969 "data_offset": 2048, 00:14:12.969 "data_size": 63488 00:14:12.969 }, 00:14:12.969 { 00:14:12.969 "name": "BaseBdev4", 00:14:12.969 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:12.969 "is_configured": true, 00:14:12.969 "data_offset": 2048, 00:14:12.969 "data_size": 63488 00:14:12.969 } 00:14:12.969 ] 00:14:12.969 }' 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.969 12:40:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:13.544 "name": "raid_bdev1", 00:14:13.544 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:13.544 "strip_size_kb": 0, 00:14:13.544 "state": "online", 00:14:13.544 "raid_level": "raid1", 00:14:13.544 "superblock": true, 00:14:13.544 "num_base_bdevs": 4, 00:14:13.544 "num_base_bdevs_discovered": 2, 00:14:13.544 "num_base_bdevs_operational": 2, 00:14:13.544 "base_bdevs_list": [ 00:14:13.544 { 00:14:13.544 "name": null, 00:14:13.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.544 "is_configured": false, 00:14:13.544 "data_offset": 0, 00:14:13.544 "data_size": 63488 00:14:13.544 }, 00:14:13.544 { 00:14:13.544 "name": null, 00:14:13.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.544 "is_configured": false, 00:14:13.544 "data_offset": 2048, 00:14:13.544 "data_size": 63488 00:14:13.544 }, 00:14:13.544 { 00:14:13.544 "name": "BaseBdev3", 00:14:13.544 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:13.544 "is_configured": true, 00:14:13.544 "data_offset": 2048, 00:14:13.544 "data_size": 63488 00:14:13.544 }, 00:14:13.544 { 00:14:13.544 "name": "BaseBdev4", 00:14:13.544 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:13.544 "is_configured": true, 00:14:13.544 "data_offset": 2048, 00:14:13.544 "data_size": 63488 00:14:13.544 } 00:14:13.544 ] 00:14:13.544 }' 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.544 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.544 [2024-12-14 12:40:13.275559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.544 [2024-12-14 12:40:13.275833] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:13.544 [2024-12-14 12:40:13.275897] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:13.544 request: 00:14:13.544 { 00:14:13.544 "base_bdev": "BaseBdev1", 00:14:13.544 "raid_bdev": "raid_bdev1", 00:14:13.544 "method": "bdev_raid_add_base_bdev", 00:14:13.544 "req_id": 1 00:14:13.804 } 00:14:13.804 Got JSON-RPC error response 00:14:13.804 response: 00:14:13.804 { 00:14:13.804 "code": -22, 00:14:13.804 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:13.804 } 00:14:13.804 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:13.804 12:40:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:13.804 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:13.804 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:13.804 12:40:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:13.804 12:40:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.768 "name": "raid_bdev1", 00:14:14.768 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:14.768 "strip_size_kb": 0, 00:14:14.768 "state": "online", 00:14:14.768 "raid_level": "raid1", 00:14:14.768 "superblock": true, 00:14:14.768 "num_base_bdevs": 4, 00:14:14.768 "num_base_bdevs_discovered": 2, 00:14:14.768 "num_base_bdevs_operational": 2, 00:14:14.768 "base_bdevs_list": [ 00:14:14.768 { 00:14:14.768 "name": null, 00:14:14.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.768 "is_configured": false, 00:14:14.768 "data_offset": 0, 00:14:14.768 "data_size": 63488 00:14:14.768 }, 00:14:14.768 { 00:14:14.768 "name": null, 00:14:14.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.768 "is_configured": false, 00:14:14.768 "data_offset": 2048, 00:14:14.768 "data_size": 63488 00:14:14.768 }, 00:14:14.768 { 00:14:14.768 "name": "BaseBdev3", 00:14:14.768 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:14.768 "is_configured": true, 00:14:14.768 "data_offset": 2048, 00:14:14.768 "data_size": 63488 00:14:14.768 }, 00:14:14.768 { 00:14:14.768 "name": "BaseBdev4", 00:14:14.768 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:14.768 "is_configured": true, 00:14:14.768 "data_offset": 2048, 00:14:14.768 "data_size": 63488 00:14:14.768 } 00:14:14.768 ] 00:14:14.768 }' 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.768 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.028 12:40:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.028 "name": "raid_bdev1", 00:14:15.028 "uuid": "870a5ef8-6233-4be7-9a12-42ddf0bc051e", 00:14:15.028 "strip_size_kb": 0, 00:14:15.028 "state": "online", 00:14:15.028 "raid_level": "raid1", 00:14:15.028 "superblock": true, 00:14:15.028 "num_base_bdevs": 4, 00:14:15.028 "num_base_bdevs_discovered": 2, 00:14:15.028 "num_base_bdevs_operational": 2, 00:14:15.028 "base_bdevs_list": [ 00:14:15.028 { 00:14:15.028 "name": null, 00:14:15.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.028 "is_configured": false, 00:14:15.028 "data_offset": 0, 00:14:15.028 "data_size": 63488 00:14:15.028 }, 00:14:15.028 { 00:14:15.028 "name": null, 00:14:15.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.028 "is_configured": false, 00:14:15.028 "data_offset": 2048, 00:14:15.028 "data_size": 63488 00:14:15.028 }, 00:14:15.028 { 00:14:15.028 "name": "BaseBdev3", 00:14:15.028 "uuid": "3b48a753-a906-58f9-b30b-8b61c5997779", 00:14:15.028 "is_configured": true, 00:14:15.028 "data_offset": 2048, 00:14:15.028 "data_size": 63488 00:14:15.028 }, 
00:14:15.028 { 00:14:15.028 "name": "BaseBdev4", 00:14:15.028 "uuid": "c370ddc6-01ea-563a-ba8d-cf81de408c40", 00:14:15.028 "is_configured": true, 00:14:15.028 "data_offset": 2048, 00:14:15.028 "data_size": 63488 00:14:15.028 } 00:14:15.028 ] 00:14:15.028 }' 00:14:15.028 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 79736 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 79736 ']' 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 79736 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79736 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79736' 00:14:15.288 killing process with pid 79736 00:14:15.288 Received shutdown signal, test time was about 60.000000 seconds 00:14:15.288 00:14:15.288 Latency(us) 00:14:15.288 [2024-12-14T12:40:15.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.288 
[2024-12-14T12:40:15.026Z] =================================================================================================================== 00:14:15.288 [2024-12-14T12:40:15.026Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:15.288 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 79736 00:14:15.289 [2024-12-14 12:40:14.864767] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:15.289 12:40:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 79736 00:14:15.289 [2024-12-14 12:40:14.864899] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:15.289 [2024-12-14 12:40:14.864974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:15.289 [2024-12-14 12:40:14.864985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:15.858 [2024-12-14 12:40:15.348420] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:16.798 00:14:16.798 real 0m25.179s 00:14:16.798 user 0m30.669s 00:14:16.798 sys 0m3.686s 00:14:16.798 ************************************ 00:14:16.798 END TEST raid_rebuild_test_sb 00:14:16.798 ************************************ 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.798 12:40:16 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:16.798 12:40:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:16.798 12:40:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:16.798 12:40:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:14:16.798 ************************************ 00:14:16.798 START TEST raid_rebuild_test_io 00:14:16.798 ************************************ 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:16.798 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:17.058 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80491 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80491 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 80491 ']' 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:17.059 12:40:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.059 [2024-12-14 12:40:16.622836] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:17.059 [2024-12-14 12:40:16.623029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80491 ] 00:14:17.059 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:17.059 Zero copy mechanism will not be used.
00:14:17.318 [2024-12-14 12:40:16.797466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.318 [2024-12-14 12:40:16.910652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:17.578 [2024-12-14 12:40:17.100010] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.578 [2024-12-14 12:40:17.100152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 BaseBdev1_malloc 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.838 [2024-12-14 12:40:17.495975] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:17.838 [2024-12-14 12:40:17.496083] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.838 [2024-12-14 12:40:17.496125] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:14:17.838 [2024-12-14 12:40:17.496137] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.838 [2024-12-14 12:40:17.498098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.838 [2024-12-14 12:40:17.498139] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:17.838 BaseBdev1 00:14:17.838 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.839 BaseBdev2_malloc 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:17.839 [2024-12-14 12:40:17.549314] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:17.839 [2024-12-14 12:40:17.549408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:17.839 [2024-12-14 12:40:17.549447] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:17.839 [2024-12-14 12:40:17.549460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:17.839 [2024-12-14 12:40:17.551527] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:17.839 [2024-12-14 12:40:17.551566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:17.839 BaseBdev2 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.839 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 BaseBdev3_malloc 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 [2024-12-14 12:40:17.619487] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:18.099 [2024-12-14 12:40:17.619582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.099 [2024-12-14 12:40:17.619608] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:18.099 [2024-12-14 12:40:17.619619] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.099 [2024-12-14 12:40:17.621659] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.099 [2024-12-14 12:40:17.621698] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 
00:14:18.099 BaseBdev3 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 BaseBdev4_malloc 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 [2024-12-14 12:40:17.671455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:18.099 [2024-12-14 12:40:17.671514] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.099 [2024-12-14 12:40:17.671534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:18.099 [2024-12-14 12:40:17.671544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.099 [2024-12-14 12:40:17.673589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.099 [2024-12-14 12:40:17.673631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:18.099 BaseBdev4 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 spare_malloc 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 spare_delay 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.099 [2024-12-14 12:40:17.736948] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:18.099 [2024-12-14 12:40:17.736999] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:18.099 [2024-12-14 12:40:17.737015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:18.099 [2024-12-14 12:40:17.737025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:18.099 [2024-12-14 12:40:17.739102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:18.099 [2024-12-14 12:40:17.739133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:18.099 spare 
00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:18.099 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 [2024-12-14 12:40:17.748980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:18.100 [2024-12-14 12:40:17.750707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:18.100 [2024-12-14 12:40:17.750839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:18.100 [2024-12-14 12:40:17.750901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:18.100 [2024-12-14 12:40:17.750994] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:18.100 [2024-12-14 12:40:17.751010] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:18.100 [2024-12-14 12:40:17.751260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:18.100 [2024-12-14 12:40:17.751426] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:18.100 [2024-12-14 12:40:17.751449] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:18.100 [2024-12-14 12:40:17.751587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 4 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.100 "name": "raid_bdev1", 00:14:18.100 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:18.100 "strip_size_kb": 0, 00:14:18.100 "state": "online", 00:14:18.100 "raid_level": "raid1", 00:14:18.100 "superblock": false, 00:14:18.100 "num_base_bdevs": 4, 00:14:18.100 "num_base_bdevs_discovered": 4, 00:14:18.100 "num_base_bdevs_operational": 4, 00:14:18.100 
"base_bdevs_list": [ 00:14:18.100 { 00:14:18.100 "name": "BaseBdev1", 00:14:18.100 "uuid": "f444f9c5-7d9c-5a60-9bf5-62b3de237ccb", 00:14:18.100 "is_configured": true, 00:14:18.100 "data_offset": 0, 00:14:18.100 "data_size": 65536 00:14:18.100 }, 00:14:18.100 { 00:14:18.100 "name": "BaseBdev2", 00:14:18.100 "uuid": "1881f9b6-43f8-5cf1-bcd5-1bab66241c19", 00:14:18.100 "is_configured": true, 00:14:18.100 "data_offset": 0, 00:14:18.100 "data_size": 65536 00:14:18.100 }, 00:14:18.100 { 00:14:18.100 "name": "BaseBdev3", 00:14:18.100 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:18.100 "is_configured": true, 00:14:18.100 "data_offset": 0, 00:14:18.100 "data_size": 65536 00:14:18.100 }, 00:14:18.100 { 00:14:18.100 "name": "BaseBdev4", 00:14:18.100 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:18.100 "is_configured": true, 00:14:18.100 "data_offset": 0, 00:14:18.100 "data_size": 65536 00:14:18.100 } 00:14:18.100 ] 00:14:18.100 }' 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.100 12:40:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.670 [2024-12-14 12:40:18.208559] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.670 [2024-12-14 12:40:18.272065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.670 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.670 "name": "raid_bdev1", 00:14:18.670 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:18.670 "strip_size_kb": 0, 00:14:18.670 "state": "online", 00:14:18.670 "raid_level": "raid1", 00:14:18.670 "superblock": false, 00:14:18.670 "num_base_bdevs": 4, 00:14:18.670 "num_base_bdevs_discovered": 3, 00:14:18.670 "num_base_bdevs_operational": 3, 00:14:18.670 "base_bdevs_list": [ 00:14:18.670 { 00:14:18.670 "name": null, 00:14:18.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.670 "is_configured": false, 00:14:18.670 "data_offset": 0, 00:14:18.670 "data_size": 65536 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "name": "BaseBdev2", 00:14:18.670 "uuid": "1881f9b6-43f8-5cf1-bcd5-1bab66241c19", 00:14:18.670 "is_configured": true, 00:14:18.670 "data_offset": 0, 00:14:18.670 "data_size": 65536 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "name": 
"BaseBdev3", 00:14:18.670 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:18.670 "is_configured": true, 00:14:18.670 "data_offset": 0, 00:14:18.670 "data_size": 65536 00:14:18.670 }, 00:14:18.670 { 00:14:18.670 "name": "BaseBdev4", 00:14:18.670 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:18.670 "is_configured": true, 00:14:18.670 "data_offset": 0, 00:14:18.670 "data_size": 65536 00:14:18.670 } 00:14:18.670 ] 00:14:18.670 }' 00:14:18.671 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.671 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.671 [2024-12-14 12:40:18.367663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:18.671 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:18.671 Zero copy mechanism will not be used. 00:14:18.671 Running I/O for 60 seconds... 00:14:19.238 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:19.238 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.239 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:19.239 [2024-12-14 12:40:18.723344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:19.239 12:40:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.239 12:40:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:19.239 [2024-12-14 12:40:18.775901] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:19.239 [2024-12-14 12:40:18.777889] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:19.239 [2024-12-14 12:40:18.886013] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:14:19.239 [2024-12-14 12:40:18.887529] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:19.499 [2024-12-14 12:40:19.103920] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:19.499 [2024-12-14 12:40:19.104708] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:19.758 158.00 IOPS, 474.00 MiB/s [2024-12-14T12:40:19.496Z] [2024-12-14 12:40:19.447365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:19.758 [2024-12-14 12:40:19.448824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:20.018 [2024-12-14 12:40:19.662223] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:20.278 "name": "raid_bdev1", 00:14:20.278 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:20.278 "strip_size_kb": 0, 00:14:20.278 "state": "online", 00:14:20.278 "raid_level": "raid1", 00:14:20.278 "superblock": false, 00:14:20.278 "num_base_bdevs": 4, 00:14:20.278 "num_base_bdevs_discovered": 4, 00:14:20.278 "num_base_bdevs_operational": 4, 00:14:20.278 "process": { 00:14:20.278 "type": "rebuild", 00:14:20.278 "target": "spare", 00:14:20.278 "progress": { 00:14:20.278 "blocks": 10240, 00:14:20.278 "percent": 15 00:14:20.278 } 00:14:20.278 }, 00:14:20.278 "base_bdevs_list": [ 00:14:20.278 { 00:14:20.278 "name": "spare", 00:14:20.278 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:20.278 "is_configured": true, 00:14:20.278 "data_offset": 0, 00:14:20.278 "data_size": 65536 00:14:20.278 }, 00:14:20.278 { 00:14:20.278 "name": "BaseBdev2", 00:14:20.278 "uuid": "1881f9b6-43f8-5cf1-bcd5-1bab66241c19", 00:14:20.278 "is_configured": true, 00:14:20.278 "data_offset": 0, 00:14:20.278 "data_size": 65536 00:14:20.278 }, 00:14:20.278 { 00:14:20.278 "name": "BaseBdev3", 00:14:20.278 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:20.278 "is_configured": true, 00:14:20.278 "data_offset": 0, 00:14:20.278 "data_size": 65536 00:14:20.278 }, 00:14:20.278 { 00:14:20.278 "name": "BaseBdev4", 00:14:20.278 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:20.278 "is_configured": true, 00:14:20.278 "data_offset": 0, 00:14:20.278 "data_size": 65536 00:14:20.278 } 00:14:20.278 ] 00:14:20.278 }' 00:14:20.278 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:20.279 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:20.279 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:20.279 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:20.279 12:40:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:20.279 12:40:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.279 12:40:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.279 [2024-12-14 12:40:19.926460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.279 [2024-12-14 12:40:19.987115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:20.539 [2024-12-14 12:40:20.087681] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:20.539 [2024-12-14 12:40:20.097230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.539 [2024-12-14 12:40:20.097338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:20.539 [2024-12-14 12:40:20.097360] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:20.539 [2024-12-14 12:40:20.120399] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.539 "name": "raid_bdev1", 00:14:20.539 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:20.539 "strip_size_kb": 0, 00:14:20.539 "state": "online", 00:14:20.539 "raid_level": "raid1", 00:14:20.539 "superblock": false, 00:14:20.539 "num_base_bdevs": 4, 00:14:20.539 "num_base_bdevs_discovered": 3, 00:14:20.539 "num_base_bdevs_operational": 3, 00:14:20.539 "base_bdevs_list": [ 00:14:20.539 { 00:14:20.539 "name": null, 00:14:20.539 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.539 "is_configured": false, 00:14:20.539 "data_offset": 0, 00:14:20.539 "data_size": 65536 00:14:20.539 }, 00:14:20.539 { 00:14:20.539 "name": "BaseBdev2", 
00:14:20.539 "uuid": "1881f9b6-43f8-5cf1-bcd5-1bab66241c19", 00:14:20.539 "is_configured": true, 00:14:20.539 "data_offset": 0, 00:14:20.539 "data_size": 65536 00:14:20.539 }, 00:14:20.539 { 00:14:20.539 "name": "BaseBdev3", 00:14:20.539 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:20.539 "is_configured": true, 00:14:20.539 "data_offset": 0, 00:14:20.539 "data_size": 65536 00:14:20.539 }, 00:14:20.539 { 00:14:20.539 "name": "BaseBdev4", 00:14:20.539 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:20.539 "is_configured": true, 00:14:20.539 "data_offset": 0, 00:14:20.539 "data_size": 65536 00:14:20.539 } 00:14:20.539 ] 00:14:20.539 }' 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.539 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.058 129.50 IOPS, 388.50 MiB/s [2024-12-14T12:40:20.796Z] 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:21.058 "name": "raid_bdev1", 00:14:21.058 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:21.058 "strip_size_kb": 0, 00:14:21.058 "state": "online", 00:14:21.058 "raid_level": "raid1", 00:14:21.058 "superblock": false, 00:14:21.058 "num_base_bdevs": 4, 00:14:21.058 "num_base_bdevs_discovered": 3, 00:14:21.058 "num_base_bdevs_operational": 3, 00:14:21.058 "base_bdevs_list": [ 00:14:21.058 { 00:14:21.058 "name": null, 00:14:21.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.058 "is_configured": false, 00:14:21.058 "data_offset": 0, 00:14:21.058 "data_size": 65536 00:14:21.058 }, 00:14:21.058 { 00:14:21.058 "name": "BaseBdev2", 00:14:21.058 "uuid": "1881f9b6-43f8-5cf1-bcd5-1bab66241c19", 00:14:21.058 "is_configured": true, 00:14:21.058 "data_offset": 0, 00:14:21.058 "data_size": 65536 00:14:21.058 }, 00:14:21.058 { 00:14:21.058 "name": "BaseBdev3", 00:14:21.058 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:21.058 "is_configured": true, 00:14:21.058 "data_offset": 0, 00:14:21.058 "data_size": 65536 00:14:21.058 }, 00:14:21.058 { 00:14:21.058 "name": "BaseBdev4", 00:14:21.058 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:21.058 "is_configured": true, 00:14:21.058 "data_offset": 0, 00:14:21.058 "data_size": 65536 00:14:21.058 } 00:14:21.058 ] 00:14:21.058 }' 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:21.058 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:21.059 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:21.059 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:14:21.059 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.059 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:21.059 [2024-12-14 12:40:20.718644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:21.059 12:40:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.059 12:40:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:21.059 [2024-12-14 12:40:20.766196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:21.059 [2024-12-14 12:40:20.768153] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:21.318 [2024-12-14 12:40:20.870603] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:21.319 [2024-12-14 12:40:20.872129] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:21.584 [2024-12-14 12:40:21.082597] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:21.584 [2024-12-14 12:40:21.083387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:21.852 145.00 IOPS, 435.00 MiB/s [2024-12-14T12:40:21.590Z] [2024-12-14 12:40:21.549635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.112 "name": "raid_bdev1", 00:14:22.112 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:22.112 "strip_size_kb": 0, 00:14:22.112 "state": "online", 00:14:22.112 "raid_level": "raid1", 00:14:22.112 "superblock": false, 00:14:22.112 "num_base_bdevs": 4, 00:14:22.112 "num_base_bdevs_discovered": 4, 00:14:22.112 "num_base_bdevs_operational": 4, 00:14:22.112 "process": { 00:14:22.112 "type": "rebuild", 00:14:22.112 "target": "spare", 00:14:22.112 "progress": { 00:14:22.112 "blocks": 12288, 00:14:22.112 "percent": 18 00:14:22.112 } 00:14:22.112 }, 00:14:22.112 "base_bdevs_list": [ 00:14:22.112 { 00:14:22.112 "name": "spare", 00:14:22.112 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:22.112 "is_configured": true, 00:14:22.112 "data_offset": 0, 00:14:22.112 "data_size": 65536 00:14:22.112 }, 00:14:22.112 { 00:14:22.112 "name": "BaseBdev2", 00:14:22.112 "uuid": "1881f9b6-43f8-5cf1-bcd5-1bab66241c19", 00:14:22.112 "is_configured": true, 00:14:22.112 "data_offset": 0, 00:14:22.112 "data_size": 65536 00:14:22.112 }, 00:14:22.112 { 00:14:22.112 "name": "BaseBdev3", 00:14:22.112 "uuid": 
"83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:22.112 "is_configured": true, 00:14:22.112 "data_offset": 0, 00:14:22.112 "data_size": 65536 00:14:22.112 }, 00:14:22.112 { 00:14:22.112 "name": "BaseBdev4", 00:14:22.112 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:22.112 "is_configured": true, 00:14:22.112 "data_offset": 0, 00:14:22.112 "data_size": 65536 00:14:22.112 } 00:14:22.112 ] 00:14:22.112 }' 00:14:22.112 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.372 12:40:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.372 [2024-12-14 12:40:21.907060] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.372 [2024-12-14 12:40:22.027497] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:22.372 [2024-12-14 12:40:22.027596] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 
0x60d0000063c0 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.372 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.372 "name": "raid_bdev1", 00:14:22.372 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:22.372 "strip_size_kb": 0, 00:14:22.372 "state": "online", 00:14:22.372 "raid_level": "raid1", 00:14:22.372 "superblock": false, 00:14:22.372 "num_base_bdevs": 4, 00:14:22.372 "num_base_bdevs_discovered": 3, 00:14:22.372 "num_base_bdevs_operational": 3, 00:14:22.372 "process": { 00:14:22.372 "type": "rebuild", 00:14:22.372 "target": "spare", 
00:14:22.373 "progress": { 00:14:22.373 "blocks": 14336, 00:14:22.373 "percent": 21 00:14:22.373 } 00:14:22.373 }, 00:14:22.373 "base_bdevs_list": [ 00:14:22.373 { 00:14:22.373 "name": "spare", 00:14:22.373 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:22.373 "is_configured": true, 00:14:22.373 "data_offset": 0, 00:14:22.373 "data_size": 65536 00:14:22.373 }, 00:14:22.373 { 00:14:22.373 "name": null, 00:14:22.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.373 "is_configured": false, 00:14:22.373 "data_offset": 0, 00:14:22.373 "data_size": 65536 00:14:22.373 }, 00:14:22.373 { 00:14:22.373 "name": "BaseBdev3", 00:14:22.373 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:22.373 "is_configured": true, 00:14:22.373 "data_offset": 0, 00:14:22.373 "data_size": 65536 00:14:22.373 }, 00:14:22.373 { 00:14:22.373 "name": "BaseBdev4", 00:14:22.373 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:22.373 "is_configured": true, 00:14:22.373 "data_offset": 0, 00:14:22.373 "data_size": 65536 00:14:22.373 } 00:14:22.373 ] 00:14:22.373 }' 00:14:22.373 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.633 [2024-12-14 12:40:22.164819] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=477 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:22.633 "name": "raid_bdev1", 00:14:22.633 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:22.633 "strip_size_kb": 0, 00:14:22.633 "state": "online", 00:14:22.633 "raid_level": "raid1", 00:14:22.633 "superblock": false, 00:14:22.633 "num_base_bdevs": 4, 00:14:22.633 "num_base_bdevs_discovered": 3, 00:14:22.633 "num_base_bdevs_operational": 3, 00:14:22.633 "process": { 00:14:22.633 "type": "rebuild", 00:14:22.633 "target": "spare", 00:14:22.633 "progress": { 00:14:22.633 "blocks": 16384, 00:14:22.633 "percent": 25 00:14:22.633 } 00:14:22.633 }, 00:14:22.633 "base_bdevs_list": [ 00:14:22.633 { 00:14:22.633 "name": "spare", 00:14:22.633 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:22.633 "is_configured": true, 00:14:22.633 "data_offset": 0, 00:14:22.633 "data_size": 65536 00:14:22.633 }, 00:14:22.633 { 00:14:22.633 "name": null, 
00:14:22.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.633 "is_configured": false, 00:14:22.633 "data_offset": 0, 00:14:22.633 "data_size": 65536 00:14:22.633 }, 00:14:22.633 { 00:14:22.633 "name": "BaseBdev3", 00:14:22.633 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:22.633 "is_configured": true, 00:14:22.633 "data_offset": 0, 00:14:22.633 "data_size": 65536 00:14:22.633 }, 00:14:22.633 { 00:14:22.633 "name": "BaseBdev4", 00:14:22.633 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:22.633 "is_configured": true, 00:14:22.633 "data_offset": 0, 00:14:22.633 "data_size": 65536 00:14:22.633 } 00:14:22.633 ] 00:14:22.633 }' 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:22.633 12:40:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:22.893 121.25 IOPS, 363.75 MiB/s [2024-12-14T12:40:22.631Z] [2024-12-14 12:40:22.483097] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:22.893 [2024-12-14 12:40:22.603512] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:23.462 [2024-12-14 12:40:22.966822] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:23.462 [2024-12-14 12:40:23.189083] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:23.462 [2024-12-14 12:40:23.189607] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.722 12:40:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.722 106.00 IOPS, 318.00 MiB/s [2024-12-14T12:40:23.460Z] 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:23.722 "name": "raid_bdev1", 00:14:23.722 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:23.722 "strip_size_kb": 0, 00:14:23.722 "state": "online", 00:14:23.722 "raid_level": "raid1", 00:14:23.722 "superblock": false, 00:14:23.722 "num_base_bdevs": 4, 00:14:23.722 "num_base_bdevs_discovered": 3, 00:14:23.722 "num_base_bdevs_operational": 3, 00:14:23.722 "process": { 00:14:23.722 "type": "rebuild", 00:14:23.722 "target": "spare", 00:14:23.722 "progress": { 00:14:23.722 "blocks": 28672, 00:14:23.722 
"percent": 43 00:14:23.722 } 00:14:23.722 }, 00:14:23.722 "base_bdevs_list": [ 00:14:23.722 { 00:14:23.722 "name": "spare", 00:14:23.722 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:23.722 "is_configured": true, 00:14:23.722 "data_offset": 0, 00:14:23.722 "data_size": 65536 00:14:23.722 }, 00:14:23.722 { 00:14:23.722 "name": null, 00:14:23.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.722 "is_configured": false, 00:14:23.722 "data_offset": 0, 00:14:23.722 "data_size": 65536 00:14:23.722 }, 00:14:23.722 { 00:14:23.722 "name": "BaseBdev3", 00:14:23.722 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:23.722 "is_configured": true, 00:14:23.722 "data_offset": 0, 00:14:23.722 "data_size": 65536 00:14:23.722 }, 00:14:23.722 { 00:14:23.722 "name": "BaseBdev4", 00:14:23.722 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:23.722 "is_configured": true, 00:14:23.722 "data_offset": 0, 00:14:23.722 "data_size": 65536 00:14:23.722 } 00:14:23.723 ] 00:14:23.723 }' 00:14:23.723 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:23.723 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:23.723 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:23.982 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:23.982 12:40:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:23.982 [2024-12-14 12:40:23.493989] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:23.982 [2024-12-14 12:40:23.494584] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:23.982 [2024-12-14 12:40:23.617104] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:14:24.242 [2024-12-14 12:40:23.952036] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:24.812 95.17 IOPS, 285.50 MiB/s [2024-12-14T12:40:24.550Z] [2024-12-14 12:40:24.396081] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.812 [2024-12-14 12:40:24.521418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.812 "name": "raid_bdev1", 00:14:24.812 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 
00:14:24.812 "strip_size_kb": 0, 00:14:24.812 "state": "online", 00:14:24.812 "raid_level": "raid1", 00:14:24.812 "superblock": false, 00:14:24.812 "num_base_bdevs": 4, 00:14:24.812 "num_base_bdevs_discovered": 3, 00:14:24.812 "num_base_bdevs_operational": 3, 00:14:24.812 "process": { 00:14:24.812 "type": "rebuild", 00:14:24.812 "target": "spare", 00:14:24.812 "progress": { 00:14:24.812 "blocks": 45056, 00:14:24.812 "percent": 68 00:14:24.812 } 00:14:24.812 }, 00:14:24.812 "base_bdevs_list": [ 00:14:24.812 { 00:14:24.812 "name": "spare", 00:14:24.812 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:24.812 "is_configured": true, 00:14:24.812 "data_offset": 0, 00:14:24.812 "data_size": 65536 00:14:24.812 }, 00:14:24.812 { 00:14:24.812 "name": null, 00:14:24.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.812 "is_configured": false, 00:14:24.812 "data_offset": 0, 00:14:24.812 "data_size": 65536 00:14:24.812 }, 00:14:24.812 { 00:14:24.812 "name": "BaseBdev3", 00:14:24.812 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:24.812 "is_configured": true, 00:14:24.812 "data_offset": 0, 00:14:24.812 "data_size": 65536 00:14:24.812 }, 00:14:24.812 { 00:14:24.812 "name": "BaseBdev4", 00:14:24.812 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:24.812 "is_configured": true, 00:14:24.812 "data_offset": 0, 00:14:24.812 "data_size": 65536 00:14:24.812 } 00:14:24.812 ] 00:14:24.812 }' 00:14:24.812 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:25.072 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:25.072 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:25.072 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:25.072 12:40:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:25.900 86.86 IOPS, 260.57 
MiB/s [2024-12-14T12:40:25.638Z] [2024-12-14 12:40:25.615034] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.900 12:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.160 12:40:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.160 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.160 "name": "raid_bdev1", 00:14:26.160 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:26.160 "strip_size_kb": 0, 00:14:26.160 "state": "online", 00:14:26.160 "raid_level": "raid1", 00:14:26.160 "superblock": false, 00:14:26.160 "num_base_bdevs": 4, 00:14:26.160 "num_base_bdevs_discovered": 3, 00:14:26.160 "num_base_bdevs_operational": 3, 00:14:26.160 "process": { 00:14:26.160 "type": "rebuild", 00:14:26.160 "target": "spare", 00:14:26.160 "progress": { 00:14:26.160 "blocks": 65536, 00:14:26.160 "percent": 
100 00:14:26.160 } 00:14:26.160 }, 00:14:26.160 "base_bdevs_list": [ 00:14:26.160 { 00:14:26.160 "name": "spare", 00:14:26.160 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:26.160 "is_configured": true, 00:14:26.160 "data_offset": 0, 00:14:26.160 "data_size": 65536 00:14:26.160 }, 00:14:26.160 { 00:14:26.160 "name": null, 00:14:26.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.160 "is_configured": false, 00:14:26.160 "data_offset": 0, 00:14:26.160 "data_size": 65536 00:14:26.160 }, 00:14:26.160 { 00:14:26.160 "name": "BaseBdev3", 00:14:26.160 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:26.160 "is_configured": true, 00:14:26.160 "data_offset": 0, 00:14:26.160 "data_size": 65536 00:14:26.160 }, 00:14:26.160 { 00:14:26.160 "name": "BaseBdev4", 00:14:26.160 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:26.160 "is_configured": true, 00:14:26.160 "data_offset": 0, 00:14:26.160 "data_size": 65536 00:14:26.160 } 00:14:26.160 ] 00:14:26.160 }' 00:14:26.160 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.160 [2024-12-14 12:40:25.714884] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:26.160 [2024-12-14 12:40:25.717716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.160 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.160 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.160 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.160 12:40:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.297 80.50 IOPS, 241.50 MiB/s [2024-12-14T12:40:27.035Z] 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.297 "name": "raid_bdev1", 00:14:27.297 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:27.297 "strip_size_kb": 0, 00:14:27.297 "state": "online", 00:14:27.297 "raid_level": "raid1", 00:14:27.297 "superblock": false, 00:14:27.297 "num_base_bdevs": 4, 00:14:27.297 "num_base_bdevs_discovered": 3, 00:14:27.297 "num_base_bdevs_operational": 3, 00:14:27.297 "base_bdevs_list": [ 00:14:27.297 { 00:14:27.297 "name": "spare", 00:14:27.297 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:27.297 "is_configured": true, 00:14:27.297 "data_offset": 0, 00:14:27.297 "data_size": 65536 00:14:27.297 }, 00:14:27.297 { 00:14:27.297 "name": null, 00:14:27.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.297 "is_configured": false, 00:14:27.297 "data_offset": 0, 00:14:27.297 "data_size": 65536 00:14:27.297 }, 
00:14:27.297 { 00:14:27.297 "name": "BaseBdev3", 00:14:27.297 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:27.297 "is_configured": true, 00:14:27.297 "data_offset": 0, 00:14:27.297 "data_size": 65536 00:14:27.297 }, 00:14:27.297 { 00:14:27.297 "name": "BaseBdev4", 00:14:27.297 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:27.297 "is_configured": true, 00:14:27.297 "data_offset": 0, 00:14:27.297 "data_size": 65536 00:14:27.297 } 00:14:27.297 ] 00:14:27.297 }' 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.297 "name": "raid_bdev1", 00:14:27.297 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:27.297 "strip_size_kb": 0, 00:14:27.297 "state": "online", 00:14:27.297 "raid_level": "raid1", 00:14:27.297 "superblock": false, 00:14:27.297 "num_base_bdevs": 4, 00:14:27.297 "num_base_bdevs_discovered": 3, 00:14:27.297 "num_base_bdevs_operational": 3, 00:14:27.297 "base_bdevs_list": [ 00:14:27.297 { 00:14:27.297 "name": "spare", 00:14:27.297 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:27.297 "is_configured": true, 00:14:27.297 "data_offset": 0, 00:14:27.297 "data_size": 65536 00:14:27.297 }, 00:14:27.297 { 00:14:27.297 "name": null, 00:14:27.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.297 "is_configured": false, 00:14:27.297 "data_offset": 0, 00:14:27.297 "data_size": 65536 00:14:27.297 }, 00:14:27.297 { 00:14:27.297 "name": "BaseBdev3", 00:14:27.297 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:27.297 "is_configured": true, 00:14:27.297 "data_offset": 0, 00:14:27.297 "data_size": 65536 00:14:27.297 }, 00:14:27.297 { 00:14:27.297 "name": "BaseBdev4", 00:14:27.297 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:27.297 "is_configured": true, 00:14:27.297 "data_offset": 0, 00:14:27.297 "data_size": 65536 00:14:27.297 } 00:14:27.297 ] 00:14:27.297 }' 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:27.297 12:40:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.297 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.557 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.557 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.557 "name": "raid_bdev1", 00:14:27.557 "uuid": "27b266c8-a39c-4680-8d4f-7620d1c28071", 00:14:27.557 "strip_size_kb": 0, 00:14:27.557 "state": "online", 00:14:27.557 "raid_level": "raid1", 00:14:27.557 "superblock": false, 00:14:27.557 
"num_base_bdevs": 4, 00:14:27.557 "num_base_bdevs_discovered": 3, 00:14:27.557 "num_base_bdevs_operational": 3, 00:14:27.557 "base_bdevs_list": [ 00:14:27.557 { 00:14:27.557 "name": "spare", 00:14:27.557 "uuid": "dc21e02c-014b-5b48-b8c1-9b8b6d0517c2", 00:14:27.557 "is_configured": true, 00:14:27.557 "data_offset": 0, 00:14:27.557 "data_size": 65536 00:14:27.557 }, 00:14:27.557 { 00:14:27.557 "name": null, 00:14:27.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.557 "is_configured": false, 00:14:27.557 "data_offset": 0, 00:14:27.557 "data_size": 65536 00:14:27.557 }, 00:14:27.557 { 00:14:27.557 "name": "BaseBdev3", 00:14:27.557 "uuid": "83aa53e6-330c-5485-ae7b-f8e129765ac6", 00:14:27.557 "is_configured": true, 00:14:27.557 "data_offset": 0, 00:14:27.557 "data_size": 65536 00:14:27.557 }, 00:14:27.557 { 00:14:27.557 "name": "BaseBdev4", 00:14:27.557 "uuid": "c96bb159-2453-5a79-a845-8879dcef179c", 00:14:27.557 "is_configured": true, 00:14:27.557 "data_offset": 0, 00:14:27.557 "data_size": 65536 00:14:27.557 } 00:14:27.557 ] 00:14:27.557 }' 00:14:27.557 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.557 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.817 74.89 IOPS, 224.67 MiB/s [2024-12-14T12:40:27.555Z] 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:27.817 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.817 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.817 [2024-12-14 12:40:27.469343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:27.817 [2024-12-14 12:40:27.469415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:27.817 00:14:27.817 Latency(us) 00:14:27.817 [2024-12-14T12:40:27.555Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:14:27.817 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:27.817 raid_bdev1 : 9.20 74.15 222.46 0.00 0.00 19254.90 348.79 117220.72 00:14:27.817 [2024-12-14T12:40:27.555Z] =================================================================================================================== 00:14:27.817 [2024-12-14T12:40:27.555Z] Total : 74.15 222.46 0.00 0.00 19254.90 348.79 117220.72 00:14:28.077 [2024-12-14 12:40:27.570512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:28.077 [2024-12-14 12:40:27.570633] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:28.077 [2024-12-14 12:40:27.570747] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:28.077 [2024-12-14 12:40:27.570794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:28.077 { 00:14:28.077 "results": [ 00:14:28.077 { 00:14:28.077 "job": "raid_bdev1", 00:14:28.077 "core_mask": "0x1", 00:14:28.077 "workload": "randrw", 00:14:28.077 "percentage": 50, 00:14:28.077 "status": "finished", 00:14:28.077 "queue_depth": 2, 00:14:28.077 "io_size": 3145728, 00:14:28.077 "runtime": 9.197092, 00:14:28.077 "iops": 74.15387385490979, 00:14:28.077 "mibps": 222.46162156472937, 00:14:28.077 "io_failed": 0, 00:14:28.077 "io_timeout": 0, 00:14:28.077 "avg_latency_us": 19254.904491029465, 00:14:28.077 "min_latency_us": 348.7860262008734, 00:14:28.077 "max_latency_us": 117220.7231441048 00:14:28.077 } 00:14:28.077 ], 00:14:28.077 "core_count": 1 00:14:28.077 } 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # 
jq length 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.077 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:28.077 /dev/nbd0 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:28.337 12:40:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.337 1+0 records in 00:14:28.337 1+0 records out 00:14:28.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385552 s, 10.6 MB/s 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.337 12:40:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:28.337 /dev/nbd1 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:28.598 1+0 records in 00:14:28.598 1+0 records out 00:14:28.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475274 s, 8.6 MB/s 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks 
/var/tmp/spdk.sock /dev/nbd1 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:28.598 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 
00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:28.858 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:29.117 /dev/nbd1 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:29.118 1+0 records in 00:14:29.118 1+0 records out 00:14:29.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336306 s, 12.2 MB/s 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.118 12:40:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 
00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:29.378 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 80491 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 80491 ']' 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 80491 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80491 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80491' 00:14:29.637 killing process with pid 80491 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 80491 00:14:29.637 Received shutdown signal, test time was about 10.956149 seconds 00:14:29.637 00:14:29.637 Latency(us) 00:14:29.637 [2024-12-14T12:40:29.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:29.637 [2024-12-14T12:40:29.375Z] 
=================================================================================================================== 00:14:29.637 [2024-12-14T12:40:29.375Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:29.637 [2024-12-14 12:40:29.304840] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:29.637 12:40:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 80491 00:14:30.207 [2024-12-14 12:40:29.708807] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:31.146 12:40:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:31.146 00:14:31.146 real 0m14.302s 00:14:31.146 user 0m17.863s 00:14:31.146 sys 0m1.770s 00:14:31.146 12:40:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:31.146 12:40:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.146 ************************************ 00:14:31.146 END TEST raid_rebuild_test_io 00:14:31.146 ************************************ 00:14:31.146 12:40:30 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:31.146 12:40:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:31.406 12:40:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:31.406 12:40:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:31.406 ************************************ 00:14:31.406 START TEST raid_rebuild_test_sb_io 00:14:31.406 ************************************ 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:31.406 12:40:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=80919 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 80919 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 80919 ']' 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:31.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:31.406 12:40:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:31.406 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:31.406 Zero copy mechanism will not be used. 00:14:31.406 [2024-12-14 12:40:30.994577] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:31.406 [2024-12-14 12:40:30.994688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80919 ] 00:14:31.666 [2024-12-14 12:40:31.154551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:31.666 [2024-12-14 12:40:31.263971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.926 [2024-12-14 12:40:31.449025] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:31.926 [2024-12-14 12:40:31.449093] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 BaseBdev1_malloc 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 [2024-12-14 12:40:31.853643] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:32.186 [2024-12-14 12:40:31.853699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.186 [2024-12-14 12:40:31.853721] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:32.186 [2024-12-14 12:40:31.853732] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.186 [2024-12-14 12:40:31.855744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.186 [2024-12-14 12:40:31.855781] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:32.186 BaseBdev1 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 BaseBdev2_malloc 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.186 [2024-12-14 12:40:31.908237] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:32.186 [2024-12-14 12:40:31.908293] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.186 [2024-12-14 12:40:31.908313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:32.186 [2024-12-14 12:40:31.908324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.186 [2024-12-14 12:40:31.910303] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.186 [2024-12-14 12:40:31.910337] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:32.186 BaseBdev2 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.186 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.446 BaseBdev3_malloc 00:14:32.446 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.446 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:32.446 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.446 12:40:31 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.446 [2024-12-14 12:40:31.971191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:32.446 [2024-12-14 12:40:31.971241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.446 [2024-12-14 12:40:31.971263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:32.446 [2024-12-14 12:40:31.971274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.446 [2024-12-14 12:40:31.973365] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.446 [2024-12-14 12:40:31.973397] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:32.446 BaseBdev3 00:14:32.446 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.446 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:32.447 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:32.447 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.447 12:40:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.447 BaseBdev4_malloc 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.447 [2024-12-14 12:40:32.023355] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:32.447 [2024-12-14 12:40:32.023409] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.447 [2024-12-14 12:40:32.023431] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:32.447 [2024-12-14 12:40:32.023442] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.447 [2024-12-14 12:40:32.025480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.447 [2024-12-14 12:40:32.025516] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:32.447 BaseBdev4 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.447 spare_malloc 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.447 spare_delay 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.447 [2024-12-14 12:40:32.088514] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:32.447 [2024-12-14 12:40:32.088559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.447 [2024-12-14 12:40:32.088576] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:32.447 [2024-12-14 12:40:32.088586] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:32.447 [2024-12-14 12:40:32.090562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.447 [2024-12-14 12:40:32.090596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:32.447 spare 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.447 [2024-12-14 12:40:32.100540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:32.447 [2024-12-14 12:40:32.102240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:32.447 [2024-12-14 12:40:32.102316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:32.447 [2024-12-14 12:40:32.102364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:32.447 [2024-12-14 12:40:32.102590] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:32.447 [2024-12-14 12:40:32.102608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:32.447 [2024-12-14 12:40:32.102848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:32.447 [2024-12-14 12:40:32.103025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:32.447 [2024-12-14 12:40:32.103050] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:32.447 [2024-12-14 12:40:32.103201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.447 "name": "raid_bdev1", 00:14:32.447 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:32.447 "strip_size_kb": 0, 00:14:32.447 "state": "online", 00:14:32.447 "raid_level": "raid1", 00:14:32.447 "superblock": true, 00:14:32.447 "num_base_bdevs": 4, 00:14:32.447 "num_base_bdevs_discovered": 4, 00:14:32.447 "num_base_bdevs_operational": 4, 00:14:32.447 "base_bdevs_list": [ 00:14:32.447 { 00:14:32.447 "name": "BaseBdev1", 00:14:32.447 "uuid": "718e3281-6495-5342-be2a-8a6b92e67d10", 00:14:32.447 "is_configured": true, 00:14:32.447 "data_offset": 2048, 00:14:32.447 "data_size": 63488 00:14:32.447 }, 00:14:32.447 { 00:14:32.447 "name": "BaseBdev2", 00:14:32.447 "uuid": "9b044b66-9e4e-5dc6-9071-73d614eae319", 00:14:32.447 "is_configured": true, 00:14:32.447 "data_offset": 2048, 00:14:32.447 "data_size": 63488 00:14:32.447 }, 00:14:32.447 { 00:14:32.447 "name": "BaseBdev3", 00:14:32.447 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:32.447 "is_configured": true, 00:14:32.447 "data_offset": 2048, 00:14:32.447 "data_size": 63488 00:14:32.447 }, 00:14:32.447 { 00:14:32.447 "name": "BaseBdev4", 00:14:32.447 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:32.447 "is_configured": true, 00:14:32.447 "data_offset": 2048, 00:14:32.447 "data_size": 63488 00:14:32.447 } 00:14:32.447 ] 00:14:32.447 }' 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:32.447 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.017 [2024-12-14 12:40:32.596032] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:33.017 12:40:32 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.017 [2024-12-14 12:40:32.675544] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.017 "name": "raid_bdev1", 00:14:33.017 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:33.017 "strip_size_kb": 0, 00:14:33.017 "state": "online", 00:14:33.017 "raid_level": "raid1", 00:14:33.017 "superblock": true, 00:14:33.017 "num_base_bdevs": 4, 00:14:33.017 "num_base_bdevs_discovered": 3, 00:14:33.017 "num_base_bdevs_operational": 3, 00:14:33.017 "base_bdevs_list": [ 00:14:33.017 { 00:14:33.017 "name": null, 00:14:33.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.017 "is_configured": false, 00:14:33.017 "data_offset": 0, 00:14:33.017 "data_size": 63488 00:14:33.017 }, 00:14:33.017 { 00:14:33.017 "name": "BaseBdev2", 00:14:33.017 "uuid": "9b044b66-9e4e-5dc6-9071-73d614eae319", 00:14:33.017 "is_configured": true, 00:14:33.017 "data_offset": 2048, 00:14:33.017 "data_size": 63488 00:14:33.017 }, 00:14:33.017 { 00:14:33.017 "name": "BaseBdev3", 00:14:33.017 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:33.017 "is_configured": true, 00:14:33.017 "data_offset": 2048, 00:14:33.017 "data_size": 63488 00:14:33.017 }, 00:14:33.017 { 00:14:33.017 "name": "BaseBdev4", 00:14:33.017 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:33.017 "is_configured": true, 00:14:33.017 "data_offset": 2048, 00:14:33.017 "data_size": 63488 00:14:33.017 } 00:14:33.017 ] 00:14:33.017 }' 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.017 12:40:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.277 [2024-12-14 12:40:32.774443] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:33.277 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:33.277 Zero copy mechanism will not be used. 
00:14:33.277 Running I/O for 60 seconds... 00:14:33.537 12:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:33.537 12:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.537 12:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:33.537 [2024-12-14 12:40:33.156886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:33.537 12:40:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.537 12:40:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:33.537 [2024-12-14 12:40:33.213600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:33.537 [2024-12-14 12:40:33.215520] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.797 [2024-12-14 12:40:33.338109] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:33.797 [2024-12-14 12:40:33.338718] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:34.056 [2024-12-14 12:40:33.542107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.056 [2024-12-14 12:40:33.542828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:34.316 176.00 IOPS, 528.00 MiB/s [2024-12-14T12:40:34.054Z] [2024-12-14 12:40:34.007909] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:34.316 [2024-12-14 12:40:34.008604] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:34.575 
12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.576 "name": "raid_bdev1", 00:14:34.576 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:34.576 "strip_size_kb": 0, 00:14:34.576 "state": "online", 00:14:34.576 "raid_level": "raid1", 00:14:34.576 "superblock": true, 00:14:34.576 "num_base_bdevs": 4, 00:14:34.576 "num_base_bdevs_discovered": 4, 00:14:34.576 "num_base_bdevs_operational": 4, 00:14:34.576 "process": { 00:14:34.576 "type": "rebuild", 00:14:34.576 "target": "spare", 00:14:34.576 "progress": { 00:14:34.576 "blocks": 10240, 00:14:34.576 "percent": 16 00:14:34.576 } 00:14:34.576 }, 00:14:34.576 "base_bdevs_list": [ 00:14:34.576 { 00:14:34.576 "name": "spare", 00:14:34.576 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:34.576 "is_configured": true, 00:14:34.576 "data_offset": 
2048, 00:14:34.576 "data_size": 63488 00:14:34.576 }, 00:14:34.576 { 00:14:34.576 "name": "BaseBdev2", 00:14:34.576 "uuid": "9b044b66-9e4e-5dc6-9071-73d614eae319", 00:14:34.576 "is_configured": true, 00:14:34.576 "data_offset": 2048, 00:14:34.576 "data_size": 63488 00:14:34.576 }, 00:14:34.576 { 00:14:34.576 "name": "BaseBdev3", 00:14:34.576 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:34.576 "is_configured": true, 00:14:34.576 "data_offset": 2048, 00:14:34.576 "data_size": 63488 00:14:34.576 }, 00:14:34.576 { 00:14:34.576 "name": "BaseBdev4", 00:14:34.576 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:34.576 "is_configured": true, 00:14:34.576 "data_offset": 2048, 00:14:34.576 "data_size": 63488 00:14:34.576 } 00:14:34.576 ] 00:14:34.576 }' 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:34.576 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.836 [2024-12-14 12:40:34.349717] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.836 [2024-12-14 12:40:34.349824] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:34.836 [2024-12-14 12:40:34.350296] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 
18432 00:14:34.836 [2024-12-14 12:40:34.452447] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:34.836 [2024-12-14 12:40:34.461757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.836 [2024-12-14 12:40:34.461805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:34.836 [2024-12-14 12:40:34.461821] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:34.836 [2024-12-14 12:40:34.489879] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.836 "name": "raid_bdev1", 00:14:34.836 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:34.836 "strip_size_kb": 0, 00:14:34.836 "state": "online", 00:14:34.836 "raid_level": "raid1", 00:14:34.836 "superblock": true, 00:14:34.836 "num_base_bdevs": 4, 00:14:34.836 "num_base_bdevs_discovered": 3, 00:14:34.836 "num_base_bdevs_operational": 3, 00:14:34.836 "base_bdevs_list": [ 00:14:34.836 { 00:14:34.836 "name": null, 00:14:34.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.836 "is_configured": false, 00:14:34.836 "data_offset": 0, 00:14:34.836 "data_size": 63488 00:14:34.836 }, 00:14:34.836 { 00:14:34.836 "name": "BaseBdev2", 00:14:34.836 "uuid": "9b044b66-9e4e-5dc6-9071-73d614eae319", 00:14:34.836 "is_configured": true, 00:14:34.836 "data_offset": 2048, 00:14:34.836 "data_size": 63488 00:14:34.836 }, 00:14:34.836 { 00:14:34.836 "name": "BaseBdev3", 00:14:34.836 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:34.836 "is_configured": true, 00:14:34.836 "data_offset": 2048, 00:14:34.836 "data_size": 63488 00:14:34.836 }, 00:14:34.836 { 00:14:34.836 "name": "BaseBdev4", 00:14:34.836 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:34.836 "is_configured": true, 00:14:34.836 "data_offset": 2048, 00:14:34.836 "data_size": 63488 00:14:34.836 } 00:14:34.836 ] 00:14:34.836 }' 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:34.836 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.356 144.50 IOPS, 433.50 MiB/s [2024-12-14T12:40:35.094Z] 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.356 12:40:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.356 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.356 "name": "raid_bdev1", 00:14:35.356 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:35.356 "strip_size_kb": 0, 00:14:35.356 "state": "online", 00:14:35.356 "raid_level": "raid1", 00:14:35.356 "superblock": true, 00:14:35.356 "num_base_bdevs": 4, 00:14:35.356 "num_base_bdevs_discovered": 3, 00:14:35.356 "num_base_bdevs_operational": 3, 00:14:35.356 "base_bdevs_list": [ 00:14:35.356 { 00:14:35.356 "name": null, 00:14:35.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.356 "is_configured": false, 00:14:35.356 "data_offset": 0, 00:14:35.356 "data_size": 63488 
00:14:35.356 }, 00:14:35.356 { 00:14:35.356 "name": "BaseBdev2", 00:14:35.356 "uuid": "9b044b66-9e4e-5dc6-9071-73d614eae319", 00:14:35.356 "is_configured": true, 00:14:35.356 "data_offset": 2048, 00:14:35.356 "data_size": 63488 00:14:35.356 }, 00:14:35.356 { 00:14:35.356 "name": "BaseBdev3", 00:14:35.356 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:35.356 "is_configured": true, 00:14:35.356 "data_offset": 2048, 00:14:35.356 "data_size": 63488 00:14:35.356 }, 00:14:35.356 { 00:14:35.356 "name": "BaseBdev4", 00:14:35.356 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:35.356 "is_configured": true, 00:14:35.356 "data_offset": 2048, 00:14:35.356 "data_size": 63488 00:14:35.356 } 00:14:35.356 ] 00:14:35.356 }' 00:14:35.356 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.356 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:35.356 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.615 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:35.615 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:35.615 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.615 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.615 [2024-12-14 12:40:35.111414] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.615 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.615 12:40:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:35.615 [2024-12-14 12:40:35.177801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:35.615 
[2024-12-14 12:40:35.179780] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.615 [2024-12-14 12:40:35.289396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:35.615 [2024-12-14 12:40:35.290845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.186 162.67 IOPS, 488.00 MiB/s [2024-12-14T12:40:35.924Z] [2024-12-14 12:40:35.778885] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:36.186 [2024-12-14 12:40:35.779471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:36.446 [2024-12-14 12:40:35.982815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:36.446 [2024-12-14 12:40:35.983092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:36.446 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.446 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.446 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.446 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.446 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.446 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.446 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.446 12:40:36 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.446 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.705 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.705 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.705 "name": "raid_bdev1", 00:14:36.706 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:36.706 "strip_size_kb": 0, 00:14:36.706 "state": "online", 00:14:36.706 "raid_level": "raid1", 00:14:36.706 "superblock": true, 00:14:36.706 "num_base_bdevs": 4, 00:14:36.706 "num_base_bdevs_discovered": 4, 00:14:36.706 "num_base_bdevs_operational": 4, 00:14:36.706 "process": { 00:14:36.706 "type": "rebuild", 00:14:36.706 "target": "spare", 00:14:36.706 "progress": { 00:14:36.706 "blocks": 12288, 00:14:36.706 "percent": 19 00:14:36.706 } 00:14:36.706 }, 00:14:36.706 "base_bdevs_list": [ 00:14:36.706 { 00:14:36.706 "name": "spare", 00:14:36.706 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:36.706 "is_configured": true, 00:14:36.706 "data_offset": 2048, 00:14:36.706 "data_size": 63488 00:14:36.706 }, 00:14:36.706 { 00:14:36.706 "name": "BaseBdev2", 00:14:36.706 "uuid": "9b044b66-9e4e-5dc6-9071-73d614eae319", 00:14:36.706 "is_configured": true, 00:14:36.706 "data_offset": 2048, 00:14:36.706 "data_size": 63488 00:14:36.706 }, 00:14:36.706 { 00:14:36.706 "name": "BaseBdev3", 00:14:36.706 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:36.706 "is_configured": true, 00:14:36.706 "data_offset": 2048, 00:14:36.706 "data_size": 63488 00:14:36.706 }, 00:14:36.706 { 00:14:36.706 "name": "BaseBdev4", 00:14:36.706 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:36.706 "is_configured": true, 00:14:36.706 "data_offset": 2048, 00:14:36.706 "data_size": 63488 00:14:36.706 } 00:14:36.706 ] 00:14:36.706 }' 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:36.706 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.706 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.706 [2024-12-14 12:40:36.331353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:36.966 [2024-12-14 12:40:36.531231] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:36.966 [2024-12-14 12:40:36.531277] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:36.966 
12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.966 "name": "raid_bdev1", 00:14:36.966 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:36.966 "strip_size_kb": 0, 00:14:36.966 "state": "online", 00:14:36.966 "raid_level": "raid1", 00:14:36.966 "superblock": true, 00:14:36.966 "num_base_bdevs": 4, 00:14:36.966 "num_base_bdevs_discovered": 3, 00:14:36.966 "num_base_bdevs_operational": 3, 00:14:36.966 "process": { 00:14:36.966 "type": "rebuild", 00:14:36.966 "target": "spare", 00:14:36.966 "progress": { 00:14:36.966 "blocks": 16384, 00:14:36.966 "percent": 25 00:14:36.966 } 00:14:36.966 }, 00:14:36.966 "base_bdevs_list": [ 00:14:36.966 { 00:14:36.966 "name": "spare", 
00:14:36.966 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:36.966 "is_configured": true, 00:14:36.966 "data_offset": 2048, 00:14:36.966 "data_size": 63488 00:14:36.966 }, 00:14:36.966 { 00:14:36.966 "name": null, 00:14:36.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.966 "is_configured": false, 00:14:36.966 "data_offset": 0, 00:14:36.966 "data_size": 63488 00:14:36.966 }, 00:14:36.966 { 00:14:36.966 "name": "BaseBdev3", 00:14:36.966 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:36.966 "is_configured": true, 00:14:36.966 "data_offset": 2048, 00:14:36.966 "data_size": 63488 00:14:36.966 }, 00:14:36.966 { 00:14:36.966 "name": "BaseBdev4", 00:14:36.966 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:36.966 "is_configured": true, 00:14:36.966 "data_offset": 2048, 00:14:36.966 "data_size": 63488 00:14:36.966 } 00:14:36.966 ] 00:14:36.966 }' 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=491 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.966 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.226 "name": "raid_bdev1", 00:14:37.226 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:37.226 "strip_size_kb": 0, 00:14:37.226 "state": "online", 00:14:37.226 "raid_level": "raid1", 00:14:37.226 "superblock": true, 00:14:37.226 "num_base_bdevs": 4, 00:14:37.226 "num_base_bdevs_discovered": 3, 00:14:37.226 "num_base_bdevs_operational": 3, 00:14:37.226 "process": { 00:14:37.226 "type": "rebuild", 00:14:37.226 "target": "spare", 00:14:37.226 "progress": { 00:14:37.226 "blocks": 18432, 00:14:37.226 "percent": 29 00:14:37.226 } 00:14:37.226 }, 00:14:37.226 "base_bdevs_list": [ 00:14:37.226 { 00:14:37.226 "name": "spare", 00:14:37.226 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:37.226 "is_configured": true, 00:14:37.226 "data_offset": 2048, 00:14:37.226 "data_size": 63488 00:14:37.226 }, 00:14:37.226 { 00:14:37.226 "name": null, 00:14:37.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.226 "is_configured": false, 00:14:37.226 "data_offset": 0, 00:14:37.226 "data_size": 63488 00:14:37.226 }, 00:14:37.226 { 00:14:37.226 "name": "BaseBdev3", 00:14:37.226 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:37.226 "is_configured": 
true, 00:14:37.226 "data_offset": 2048, 00:14:37.226 "data_size": 63488 00:14:37.226 }, 00:14:37.226 { 00:14:37.226 "name": "BaseBdev4", 00:14:37.226 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:37.226 "is_configured": true, 00:14:37.226 "data_offset": 2048, 00:14:37.226 "data_size": 63488 00:14:37.226 } 00:14:37.226 ] 00:14:37.226 }' 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.226 138.25 IOPS, 414.75 MiB/s [2024-12-14T12:40:36.964Z] 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.226 12:40:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.226 [2024-12-14 12:40:36.896128] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:37.794 [2024-12-14 12:40:37.360651] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:37.794 [2024-12-14 12:40:37.360900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:38.053 [2024-12-14 12:40:37.598538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:38.053 [2024-12-14 12:40:37.714256] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:14:38.313 121.40 IOPS, 364.20 MiB/s [2024-12-14T12:40:38.051Z] 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.313 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.313 "name": "raid_bdev1", 00:14:38.313 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:38.313 "strip_size_kb": 0, 00:14:38.313 "state": "online", 00:14:38.313 "raid_level": "raid1", 00:14:38.313 "superblock": true, 00:14:38.313 "num_base_bdevs": 4, 00:14:38.313 "num_base_bdevs_discovered": 3, 00:14:38.313 "num_base_bdevs_operational": 3, 00:14:38.313 "process": { 00:14:38.313 "type": "rebuild", 00:14:38.313 "target": "spare", 00:14:38.313 "progress": { 00:14:38.313 "blocks": 34816, 00:14:38.313 "percent": 54 00:14:38.313 } 00:14:38.313 }, 00:14:38.313 "base_bdevs_list": [ 00:14:38.313 { 00:14:38.313 "name": "spare", 00:14:38.314 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:38.314 "is_configured": true, 00:14:38.314 "data_offset": 2048, 00:14:38.314 "data_size": 63488 
00:14:38.314 }, 00:14:38.314 { 00:14:38.314 "name": null, 00:14:38.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.314 "is_configured": false, 00:14:38.314 "data_offset": 0, 00:14:38.314 "data_size": 63488 00:14:38.314 }, 00:14:38.314 { 00:14:38.314 "name": "BaseBdev3", 00:14:38.314 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:38.314 "is_configured": true, 00:14:38.314 "data_offset": 2048, 00:14:38.314 "data_size": 63488 00:14:38.314 }, 00:14:38.314 { 00:14:38.314 "name": "BaseBdev4", 00:14:38.314 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:38.314 "is_configured": true, 00:14:38.314 "data_offset": 2048, 00:14:38.314 "data_size": 63488 00:14:38.314 } 00:14:38.314 ] 00:14:38.314 }' 00:14:38.314 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.314 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:38.314 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.314 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.314 12:40:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:38.573 [2024-12-14 12:40:38.072578] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:14:38.573 [2024-12-14 12:40:38.175500] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:38.573 [2024-12-14 12:40:38.175730] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:39.142 110.17 IOPS, 330.50 MiB/s [2024-12-14T12:40:38.881Z] [2024-12-14 12:40:38.846405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 
00:14:39.402 12:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.402 12:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.403 12:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.403 12:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.403 12:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.403 12:40:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.403 "name": "raid_bdev1", 00:14:39.403 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:39.403 "strip_size_kb": 0, 00:14:39.403 "state": "online", 00:14:39.403 "raid_level": "raid1", 00:14:39.403 "superblock": true, 00:14:39.403 "num_base_bdevs": 4, 00:14:39.403 "num_base_bdevs_discovered": 3, 00:14:39.403 "num_base_bdevs_operational": 3, 00:14:39.403 "process": { 00:14:39.403 "type": "rebuild", 00:14:39.403 "target": "spare", 00:14:39.403 "progress": { 00:14:39.403 "blocks": 53248, 00:14:39.403 "percent": 83 00:14:39.403 } 00:14:39.403 }, 00:14:39.403 "base_bdevs_list": [ 00:14:39.403 { 00:14:39.403 "name": "spare", 
00:14:39.403 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:39.403 "is_configured": true, 00:14:39.403 "data_offset": 2048, 00:14:39.403 "data_size": 63488 00:14:39.403 }, 00:14:39.403 { 00:14:39.403 "name": null, 00:14:39.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.403 "is_configured": false, 00:14:39.403 "data_offset": 0, 00:14:39.403 "data_size": 63488 00:14:39.403 }, 00:14:39.403 { 00:14:39.403 "name": "BaseBdev3", 00:14:39.403 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:39.403 "is_configured": true, 00:14:39.403 "data_offset": 2048, 00:14:39.403 "data_size": 63488 00:14:39.403 }, 00:14:39.403 { 00:14:39.403 "name": "BaseBdev4", 00:14:39.403 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:39.403 "is_configured": true, 00:14:39.403 "data_offset": 2048, 00:14:39.403 "data_size": 63488 00:14:39.403 } 00:14:39.403 ] 00:14:39.403 }' 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.403 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.663 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.663 12:40:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:39.663 [2024-12-14 12:40:39.284376] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:39.922 [2024-12-14 12:40:39.508129] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:39.922 [2024-12-14 12:40:39.607923] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:39.922 [2024-12-14 12:40:39.610662] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:40.441 99.71 IOPS, 299.14 MiB/s [2024-12-14T12:40:40.179Z] 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.441 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.441 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.441 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.441 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.441 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.442 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.442 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.442 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.442 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.706 "name": "raid_bdev1", 00:14:40.706 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:40.706 "strip_size_kb": 0, 00:14:40.706 "state": "online", 00:14:40.706 "raid_level": "raid1", 00:14:40.706 "superblock": true, 00:14:40.706 "num_base_bdevs": 4, 00:14:40.706 "num_base_bdevs_discovered": 3, 00:14:40.706 "num_base_bdevs_operational": 3, 00:14:40.706 "base_bdevs_list": [ 00:14:40.706 { 00:14:40.706 "name": "spare", 00:14:40.706 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:40.706 "is_configured": true, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 
63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": null, 00:14:40.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.706 "is_configured": false, 00:14:40.706 "data_offset": 0, 00:14:40.706 "data_size": 63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": "BaseBdev3", 00:14:40.706 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:40.706 "is_configured": true, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": "BaseBdev4", 00:14:40.706 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:40.706 "is_configured": true, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 63488 00:14:40.706 } 00:14:40.706 ] 00:14:40.706 }' 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.706 12:40:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.706 "name": "raid_bdev1", 00:14:40.706 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:40.706 "strip_size_kb": 0, 00:14:40.706 "state": "online", 00:14:40.706 "raid_level": "raid1", 00:14:40.706 "superblock": true, 00:14:40.706 "num_base_bdevs": 4, 00:14:40.706 "num_base_bdevs_discovered": 3, 00:14:40.706 "num_base_bdevs_operational": 3, 00:14:40.706 "base_bdevs_list": [ 00:14:40.706 { 00:14:40.706 "name": "spare", 00:14:40.706 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:40.706 "is_configured": true, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": null, 00:14:40.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.706 "is_configured": false, 00:14:40.706 "data_offset": 0, 00:14:40.706 "data_size": 63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": "BaseBdev3", 00:14:40.706 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:40.706 "is_configured": true, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 63488 00:14:40.706 }, 00:14:40.706 { 00:14:40.706 "name": "BaseBdev4", 00:14:40.706 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:40.706 "is_configured": true, 00:14:40.706 "data_offset": 2048, 00:14:40.706 "data_size": 63488 00:14:40.706 } 00:14:40.706 ] 00:14:40.706 }' 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.706 12:40:40 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.706 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.707 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.967 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.967 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.967 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.967 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.967 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:40.967 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.967 "name": "raid_bdev1", 00:14:40.967 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:40.967 "strip_size_kb": 0, 00:14:40.967 "state": "online", 00:14:40.967 "raid_level": "raid1", 00:14:40.967 "superblock": true, 00:14:40.967 "num_base_bdevs": 4, 00:14:40.967 "num_base_bdevs_discovered": 3, 00:14:40.967 "num_base_bdevs_operational": 3, 00:14:40.967 "base_bdevs_list": [ 00:14:40.967 { 00:14:40.967 "name": "spare", 00:14:40.967 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:40.967 "is_configured": true, 00:14:40.967 "data_offset": 2048, 00:14:40.967 "data_size": 63488 00:14:40.967 }, 00:14:40.967 { 00:14:40.967 "name": null, 00:14:40.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.967 "is_configured": false, 00:14:40.967 "data_offset": 0, 00:14:40.967 "data_size": 63488 00:14:40.967 }, 00:14:40.967 { 00:14:40.967 "name": "BaseBdev3", 00:14:40.967 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:40.967 "is_configured": true, 00:14:40.967 "data_offset": 2048, 00:14:40.967 "data_size": 63488 00:14:40.967 }, 00:14:40.967 { 00:14:40.967 "name": "BaseBdev4", 00:14:40.967 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:40.967 "is_configured": true, 00:14:40.967 "data_offset": 2048, 00:14:40.967 "data_size": 63488 00:14:40.967 } 00:14:40.967 ] 00:14:40.967 }' 00:14:40.967 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.967 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.227 91.00 IOPS, 273.00 MiB/s [2024-12-14T12:40:40.965Z] 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:41.227 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.227 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:41.227 [2024-12-14 12:40:40.868343] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:41.227 [2024-12-14 12:40:40.868386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.227 00:14:41.227 Latency(us) 00:14:41.227 [2024-12-14T12:40:40.965Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:41.227 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:41.227 raid_bdev1 : 8.16 89.71 269.14 0.00 0.00 15326.41 332.69 116762.83 00:14:41.227 [2024-12-14T12:40:40.965Z] =================================================================================================================== 00:14:41.227 [2024-12-14T12:40:40.965Z] Total : 89.71 269.14 0.00 0.00 15326.41 332.69 116762.83 00:14:41.227 [2024-12-14 12:40:40.941329] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.227 [2024-12-14 12:40:40.941401] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:41.227 [2024-12-14 12:40:40.941495] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:41.227 [2024-12-14 12:40:40.941507] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:41.227 { 00:14:41.227 "results": [ 00:14:41.227 { 00:14:41.227 "job": "raid_bdev1", 00:14:41.227 "core_mask": "0x1", 00:14:41.227 "workload": "randrw", 00:14:41.227 "percentage": 50, 00:14:41.227 "status": "finished", 00:14:41.227 "queue_depth": 2, 00:14:41.227 "io_size": 3145728, 00:14:41.227 "runtime": 8.159453, 00:14:41.227 "iops": 89.71189612833115, 00:14:41.227 "mibps": 269.13568838499344, 00:14:41.227 "io_failed": 0, 00:14:41.227 "io_timeout": 0, 00:14:41.227 "avg_latency_us": 15326.40790321426, 00:14:41.227 "min_latency_us": 332.6882096069869, 00:14:41.227 
"max_latency_us": 116762.82969432314 00:14:41.227 } 00:14:41.227 ], 00:14:41.227 "core_count": 1 00:14:41.227 } 00:14:41.227 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.227 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.227 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.227 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:41.227 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:41.227 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:41.487 12:40:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:41.487 /dev/nbd0 00:14:41.746 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.747 1+0 records in 00:14:41.747 1+0 records out 00:14:41.747 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035244 s, 11.6 MB/s 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:41.747 12:40:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:41.747 /dev/nbd1 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:41.747 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.012 1+0 records in 00:14:42.012 1+0 records out 00:14:42.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145002 s, 2.8 MB/s 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.012 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 
/proc/partitions 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.286 12:40:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:42.563 /dev/nbd1 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@873 -- # local i 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.563 1+0 records in 00:14:42.563 1+0 records out 00:14:42.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000320258 s, 12.8 MB/s 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:42.563 12:40:42 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.563 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0') 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:42.838 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b 
spare_delay -p spare 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.098 [2024-12-14 12:40:42.656168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:43.098 [2024-12-14 12:40:42.656227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.098 [2024-12-14 12:40:42.656250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:43.098 [2024-12-14 12:40:42.656261] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.098 [2024-12-14 12:40:42.658457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.098 [2024-12-14 12:40:42.658498] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:43.098 [2024-12-14 12:40:42.658587] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:43.098 [2024-12-14 12:40:42.658645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.098 [2024-12-14 12:40:42.658780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:43.098 [2024-12-14 12:40:42.658880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:43.098 spare 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.098 [2024-12-14 12:40:42.758772] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007b00 00:14:43.098 [2024-12-14 12:40:42.758806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:43.098 [2024-12-14 12:40:42.759116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:43.098 [2024-12-14 12:40:42.759312] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:43.098 [2024-12-14 12:40:42.759327] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:43.098 [2024-12-14 12:40:42.759506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.098 "name": "raid_bdev1", 00:14:43.098 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:43.098 "strip_size_kb": 0, 00:14:43.098 "state": "online", 00:14:43.098 "raid_level": "raid1", 00:14:43.098 "superblock": true, 00:14:43.098 "num_base_bdevs": 4, 00:14:43.098 "num_base_bdevs_discovered": 3, 00:14:43.098 "num_base_bdevs_operational": 3, 00:14:43.098 "base_bdevs_list": [ 00:14:43.098 { 00:14:43.098 "name": "spare", 00:14:43.098 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:43.098 "is_configured": true, 00:14:43.098 "data_offset": 2048, 00:14:43.098 "data_size": 63488 00:14:43.098 }, 00:14:43.098 { 00:14:43.098 "name": null, 00:14:43.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.098 "is_configured": false, 00:14:43.098 "data_offset": 2048, 00:14:43.098 "data_size": 63488 00:14:43.098 }, 00:14:43.098 { 00:14:43.098 "name": "BaseBdev3", 00:14:43.098 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:43.098 "is_configured": true, 00:14:43.098 "data_offset": 2048, 00:14:43.098 "data_size": 63488 00:14:43.098 }, 00:14:43.098 { 00:14:43.098 "name": "BaseBdev4", 00:14:43.098 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:43.098 "is_configured": true, 00:14:43.098 "data_offset": 2048, 00:14:43.098 "data_size": 63488 00:14:43.098 } 00:14:43.098 ] 00:14:43.098 }' 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.098 12:40:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.668 "name": "raid_bdev1", 00:14:43.668 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:43.668 "strip_size_kb": 0, 00:14:43.668 "state": "online", 00:14:43.668 "raid_level": "raid1", 00:14:43.668 "superblock": true, 00:14:43.668 "num_base_bdevs": 4, 00:14:43.668 "num_base_bdevs_discovered": 3, 00:14:43.668 "num_base_bdevs_operational": 3, 00:14:43.668 "base_bdevs_list": [ 00:14:43.668 { 00:14:43.668 "name": "spare", 00:14:43.668 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:43.668 "is_configured": true, 00:14:43.668 "data_offset": 2048, 00:14:43.668 "data_size": 63488 00:14:43.668 }, 
00:14:43.668 { 00:14:43.668 "name": null, 00:14:43.668 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.668 "is_configured": false, 00:14:43.668 "data_offset": 2048, 00:14:43.668 "data_size": 63488 00:14:43.668 }, 00:14:43.668 { 00:14:43.668 "name": "BaseBdev3", 00:14:43.668 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:43.668 "is_configured": true, 00:14:43.668 "data_offset": 2048, 00:14:43.668 "data_size": 63488 00:14:43.668 }, 00:14:43.668 { 00:14:43.668 "name": "BaseBdev4", 00:14:43.668 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:43.668 "is_configured": true, 00:14:43.668 "data_offset": 2048, 00:14:43.668 "data_size": 63488 00:14:43.668 } 00:14:43.668 ] 00:14:43.668 }' 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:43.668 12:40:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.668 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.928 [2024-12-14 12:40:43.407036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.928 
12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.928 "name": "raid_bdev1", 00:14:43.928 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:43.928 "strip_size_kb": 0, 00:14:43.928 "state": "online", 00:14:43.928 "raid_level": "raid1", 00:14:43.928 "superblock": true, 00:14:43.928 "num_base_bdevs": 4, 00:14:43.928 "num_base_bdevs_discovered": 2, 00:14:43.928 "num_base_bdevs_operational": 2, 00:14:43.928 "base_bdevs_list": [ 00:14:43.928 { 00:14:43.928 "name": null, 00:14:43.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.928 "is_configured": false, 00:14:43.928 "data_offset": 0, 00:14:43.928 "data_size": 63488 00:14:43.928 }, 00:14:43.928 { 00:14:43.928 "name": null, 00:14:43.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.928 "is_configured": false, 00:14:43.928 "data_offset": 2048, 00:14:43.928 "data_size": 63488 00:14:43.928 }, 00:14:43.928 { 00:14:43.928 "name": "BaseBdev3", 00:14:43.928 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:43.928 "is_configured": true, 00:14:43.928 "data_offset": 2048, 00:14:43.928 "data_size": 63488 00:14:43.928 }, 00:14:43.928 { 00:14:43.928 "name": "BaseBdev4", 00:14:43.928 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:43.928 "is_configured": true, 00:14:43.928 "data_offset": 2048, 00:14:43.928 "data_size": 63488 00:14:43.928 } 00:14:43.928 ] 00:14:43.928 }' 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.928 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.187 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.187 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.187 12:40:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:44.187 [2024-12-14 12:40:43.830393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.187 [2024-12-14 12:40:43.830619] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:44.187 [2024-12-14 12:40:43.830646] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:44.187 [2024-12-14 12:40:43.830680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.187 [2024-12-14 12:40:43.845010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:44.188 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.188 12:40:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:44.188 [2024-12-14 12:40:43.846862] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.126 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.386 "name": "raid_bdev1", 00:14:45.386 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:45.386 "strip_size_kb": 0, 00:14:45.386 "state": "online", 00:14:45.386 "raid_level": "raid1", 00:14:45.386 "superblock": true, 00:14:45.386 "num_base_bdevs": 4, 00:14:45.386 "num_base_bdevs_discovered": 3, 00:14:45.386 "num_base_bdevs_operational": 3, 00:14:45.386 "process": { 00:14:45.386 "type": "rebuild", 00:14:45.386 "target": "spare", 00:14:45.386 "progress": { 00:14:45.386 "blocks": 20480, 00:14:45.386 "percent": 32 00:14:45.386 } 00:14:45.386 }, 00:14:45.386 "base_bdevs_list": [ 00:14:45.386 { 00:14:45.386 "name": "spare", 00:14:45.386 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:45.386 "is_configured": true, 00:14:45.386 "data_offset": 2048, 00:14:45.386 "data_size": 63488 00:14:45.386 }, 00:14:45.386 { 00:14:45.386 "name": null, 00:14:45.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.386 "is_configured": false, 00:14:45.386 "data_offset": 2048, 00:14:45.386 "data_size": 63488 00:14:45.386 }, 00:14:45.386 { 00:14:45.386 "name": "BaseBdev3", 00:14:45.386 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:45.386 "is_configured": true, 00:14:45.386 "data_offset": 2048, 00:14:45.386 "data_size": 63488 00:14:45.386 }, 00:14:45.386 { 00:14:45.386 "name": "BaseBdev4", 00:14:45.386 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:45.386 "is_configured": true, 00:14:45.386 "data_offset": 2048, 00:14:45.386 "data_size": 63488 00:14:45.386 } 00:14:45.386 ] 00:14:45.386 }' 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.386 12:40:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.386 [2024-12-14 12:40:44.991056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.386 [2024-12-14 12:40:45.051887] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:45.386 [2024-12-14 12:40:45.052015] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.386 [2024-12-14 12:40:45.052065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.386 [2024-12-14 12:40:45.052109] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.386 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.645 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.645 "name": "raid_bdev1", 00:14:45.645 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:45.645 "strip_size_kb": 0, 00:14:45.645 "state": "online", 00:14:45.645 "raid_level": "raid1", 00:14:45.645 "superblock": true, 00:14:45.645 "num_base_bdevs": 4, 00:14:45.645 "num_base_bdevs_discovered": 2, 00:14:45.645 "num_base_bdevs_operational": 2, 00:14:45.645 "base_bdevs_list": [ 00:14:45.645 { 00:14:45.645 "name": null, 00:14:45.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.645 "is_configured": false, 00:14:45.645 "data_offset": 0, 00:14:45.645 "data_size": 63488 00:14:45.645 }, 00:14:45.645 { 00:14:45.645 "name": null, 00:14:45.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.645 "is_configured": false, 00:14:45.645 
"data_offset": 2048, 00:14:45.645 "data_size": 63488 00:14:45.645 }, 00:14:45.645 { 00:14:45.645 "name": "BaseBdev3", 00:14:45.645 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:45.645 "is_configured": true, 00:14:45.645 "data_offset": 2048, 00:14:45.645 "data_size": 63488 00:14:45.645 }, 00:14:45.645 { 00:14:45.646 "name": "BaseBdev4", 00:14:45.646 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:45.646 "is_configured": true, 00:14:45.646 "data_offset": 2048, 00:14:45.646 "data_size": 63488 00:14:45.646 } 00:14:45.646 ] 00:14:45.646 }' 00:14:45.646 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.646 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.904 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.904 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.904 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.904 [2024-12-14 12:40:45.519707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.904 [2024-12-14 12:40:45.519835] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.904 [2024-12-14 12:40:45.519883] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:45.904 [2024-12-14 12:40:45.519915] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.904 [2024-12-14 12:40:45.520439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.904 [2024-12-14 12:40:45.520502] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.904 [2024-12-14 12:40:45.520640] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:45.904 [2024-12-14 
12:40:45.520684] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:45.904 [2024-12-14 12:40:45.520727] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:45.904 [2024-12-14 12:40:45.520803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.904 [2024-12-14 12:40:45.535257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:45.904 spare 00:14:45.904 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.904 [2024-12-14 12:40:45.537106] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.904 12:40:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.842 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.102 "name": "raid_bdev1", 00:14:47.102 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:47.102 "strip_size_kb": 0, 00:14:47.102 "state": "online", 00:14:47.102 "raid_level": "raid1", 00:14:47.102 "superblock": true, 00:14:47.102 "num_base_bdevs": 4, 00:14:47.102 "num_base_bdevs_discovered": 3, 00:14:47.102 "num_base_bdevs_operational": 3, 00:14:47.102 "process": { 00:14:47.102 "type": "rebuild", 00:14:47.102 "target": "spare", 00:14:47.102 "progress": { 00:14:47.102 "blocks": 20480, 00:14:47.102 "percent": 32 00:14:47.102 } 00:14:47.102 }, 00:14:47.102 "base_bdevs_list": [ 00:14:47.102 { 00:14:47.102 "name": "spare", 00:14:47.102 "uuid": "ee49cd1f-ac6b-59a8-b119-a8dacedfad22", 00:14:47.102 "is_configured": true, 00:14:47.102 "data_offset": 2048, 00:14:47.102 "data_size": 63488 00:14:47.102 }, 00:14:47.102 { 00:14:47.102 "name": null, 00:14:47.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.102 "is_configured": false, 00:14:47.102 "data_offset": 2048, 00:14:47.102 "data_size": 63488 00:14:47.102 }, 00:14:47.102 { 00:14:47.102 "name": "BaseBdev3", 00:14:47.102 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:47.102 "is_configured": true, 00:14:47.102 "data_offset": 2048, 00:14:47.102 "data_size": 63488 00:14:47.102 }, 00:14:47.102 { 00:14:47.102 "name": "BaseBdev4", 00:14:47.102 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:47.102 "is_configured": true, 00:14:47.102 "data_offset": 2048, 00:14:47.102 "data_size": 63488 00:14:47.102 } 00:14:47.102 ] 00:14:47.102 }' 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.102 [2024-12-14 12:40:46.676725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.102 [2024-12-14 12:40:46.742281] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:47.102 [2024-12-14 12:40:46.742336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.102 [2024-12-14 12:40:46.742373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.102 [2024-12-14 12:40:46.742380] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.102 "name": "raid_bdev1", 00:14:47.102 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:47.102 "strip_size_kb": 0, 00:14:47.102 "state": "online", 00:14:47.102 "raid_level": "raid1", 00:14:47.102 "superblock": true, 00:14:47.102 "num_base_bdevs": 4, 00:14:47.102 "num_base_bdevs_discovered": 2, 00:14:47.102 "num_base_bdevs_operational": 2, 00:14:47.102 "base_bdevs_list": [ 00:14:47.102 { 00:14:47.102 "name": null, 00:14:47.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.102 "is_configured": false, 00:14:47.102 "data_offset": 0, 00:14:47.102 "data_size": 63488 00:14:47.102 }, 00:14:47.102 { 00:14:47.102 "name": null, 00:14:47.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.102 "is_configured": false, 00:14:47.102 "data_offset": 2048, 00:14:47.102 "data_size": 63488 00:14:47.102 }, 00:14:47.102 { 00:14:47.102 "name": "BaseBdev3", 00:14:47.102 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:47.102 "is_configured": true, 
00:14:47.102 "data_offset": 2048, 00:14:47.102 "data_size": 63488 00:14:47.102 }, 00:14:47.102 { 00:14:47.102 "name": "BaseBdev4", 00:14:47.102 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:47.102 "is_configured": true, 00:14:47.102 "data_offset": 2048, 00:14:47.102 "data_size": 63488 00:14:47.102 } 00:14:47.102 ] 00:14:47.102 }' 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.102 12:40:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.672 "name": "raid_bdev1", 00:14:47.672 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:47.672 "strip_size_kb": 0, 00:14:47.672 "state": "online", 00:14:47.672 "raid_level": "raid1", 00:14:47.672 
"superblock": true, 00:14:47.672 "num_base_bdevs": 4, 00:14:47.672 "num_base_bdevs_discovered": 2, 00:14:47.672 "num_base_bdevs_operational": 2, 00:14:47.672 "base_bdevs_list": [ 00:14:47.672 { 00:14:47.672 "name": null, 00:14:47.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.672 "is_configured": false, 00:14:47.672 "data_offset": 0, 00:14:47.672 "data_size": 63488 00:14:47.672 }, 00:14:47.672 { 00:14:47.672 "name": null, 00:14:47.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.672 "is_configured": false, 00:14:47.672 "data_offset": 2048, 00:14:47.672 "data_size": 63488 00:14:47.672 }, 00:14:47.672 { 00:14:47.672 "name": "BaseBdev3", 00:14:47.672 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:47.672 "is_configured": true, 00:14:47.672 "data_offset": 2048, 00:14:47.672 "data_size": 63488 00:14:47.672 }, 00:14:47.672 { 00:14:47.672 "name": "BaseBdev4", 00:14:47.672 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:47.672 "is_configured": true, 00:14:47.672 "data_offset": 2048, 00:14:47.672 "data_size": 63488 00:14:47.672 } 00:14:47.672 ] 00:14:47.672 }' 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.672 [2024-12-14 12:40:47.397595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:47.672 [2024-12-14 12:40:47.397657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:47.672 [2024-12-14 12:40:47.397685] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:47.672 [2024-12-14 12:40:47.397694] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:47.672 [2024-12-14 12:40:47.398148] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:47.672 [2024-12-14 12:40:47.398166] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:47.672 [2024-12-14 12:40:47.398247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:47.672 [2024-12-14 12:40:47.398262] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:47.672 [2024-12-14 12:40:47.398271] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:47.672 [2024-12-14 12:40:47.398284] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:47.672 BaseBdev1 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.672 12:40:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.052 "name": "raid_bdev1", 00:14:49.052 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:49.052 "strip_size_kb": 0, 00:14:49.052 "state": "online", 00:14:49.052 "raid_level": "raid1", 00:14:49.052 "superblock": true, 00:14:49.052 
"num_base_bdevs": 4, 00:14:49.052 "num_base_bdevs_discovered": 2, 00:14:49.052 "num_base_bdevs_operational": 2, 00:14:49.052 "base_bdevs_list": [ 00:14:49.052 { 00:14:49.052 "name": null, 00:14:49.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.052 "is_configured": false, 00:14:49.052 "data_offset": 0, 00:14:49.052 "data_size": 63488 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "name": null, 00:14:49.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.052 "is_configured": false, 00:14:49.052 "data_offset": 2048, 00:14:49.052 "data_size": 63488 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "name": "BaseBdev3", 00:14:49.052 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:49.052 "is_configured": true, 00:14:49.052 "data_offset": 2048, 00:14:49.052 "data_size": 63488 00:14:49.052 }, 00:14:49.052 { 00:14:49.052 "name": "BaseBdev4", 00:14:49.052 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:49.052 "is_configured": true, 00:14:49.052 "data_offset": 2048, 00:14:49.052 "data_size": 63488 00:14:49.052 } 00:14:49.052 ] 00:14:49.052 }' 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.052 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.312 12:40:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.312 "name": "raid_bdev1", 00:14:49.312 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:49.312 "strip_size_kb": 0, 00:14:49.312 "state": "online", 00:14:49.312 "raid_level": "raid1", 00:14:49.312 "superblock": true, 00:14:49.312 "num_base_bdevs": 4, 00:14:49.312 "num_base_bdevs_discovered": 2, 00:14:49.312 "num_base_bdevs_operational": 2, 00:14:49.312 "base_bdevs_list": [ 00:14:49.312 { 00:14:49.312 "name": null, 00:14:49.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.312 "is_configured": false, 00:14:49.312 "data_offset": 0, 00:14:49.312 "data_size": 63488 00:14:49.312 }, 00:14:49.312 { 00:14:49.312 "name": null, 00:14:49.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.312 "is_configured": false, 00:14:49.312 "data_offset": 2048, 00:14:49.312 "data_size": 63488 00:14:49.312 }, 00:14:49.312 { 00:14:49.312 "name": "BaseBdev3", 00:14:49.312 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:49.312 "is_configured": true, 00:14:49.312 "data_offset": 2048, 00:14:49.312 "data_size": 63488 00:14:49.312 }, 00:14:49.312 { 00:14:49.312 "name": "BaseBdev4", 00:14:49.312 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:49.312 "is_configured": true, 00:14:49.312 "data_offset": 2048, 00:14:49.312 "data_size": 63488 00:14:49.312 } 00:14:49.312 ] 00:14:49.312 }' 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.312 12:40:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.312 [2024-12-14 12:40:48.959149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:49.312 [2024-12-14 12:40:48.959312] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:49.312 [2024-12-14 12:40:48.959326] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 
00:14:49.312 request: 00:14:49.312 { 00:14:49.312 "base_bdev": "BaseBdev1", 00:14:49.312 "raid_bdev": "raid_bdev1", 00:14:49.312 "method": "bdev_raid_add_base_bdev", 00:14:49.312 "req_id": 1 00:14:49.312 } 00:14:49.312 Got JSON-RPC error response 00:14:49.312 response: 00:14:49.312 { 00:14:49.312 "code": -22, 00:14:49.312 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:49.312 } 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:49.312 12:40:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.251 12:40:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.251 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.510 12:40:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.510 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.510 "name": "raid_bdev1", 00:14:50.510 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:50.510 "strip_size_kb": 0, 00:14:50.510 "state": "online", 00:14:50.510 "raid_level": "raid1", 00:14:50.510 "superblock": true, 00:14:50.510 "num_base_bdevs": 4, 00:14:50.510 "num_base_bdevs_discovered": 2, 00:14:50.510 "num_base_bdevs_operational": 2, 00:14:50.510 "base_bdevs_list": [ 00:14:50.510 { 00:14:50.510 "name": null, 00:14:50.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.510 "is_configured": false, 00:14:50.510 "data_offset": 0, 00:14:50.510 "data_size": 63488 00:14:50.510 }, 00:14:50.510 { 00:14:50.510 "name": null, 00:14:50.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.510 "is_configured": false, 00:14:50.510 "data_offset": 2048, 00:14:50.510 "data_size": 63488 00:14:50.510 }, 00:14:50.510 { 00:14:50.510 "name": "BaseBdev3", 00:14:50.510 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:50.510 "is_configured": true, 00:14:50.510 "data_offset": 2048, 00:14:50.510 "data_size": 63488 00:14:50.510 }, 00:14:50.511 { 00:14:50.511 "name": "BaseBdev4", 00:14:50.511 "uuid": 
"3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:50.511 "is_configured": true, 00:14:50.511 "data_offset": 2048, 00:14:50.511 "data_size": 63488 00:14:50.511 } 00:14:50.511 ] 00:14:50.511 }' 00:14:50.511 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.511 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.770 "name": "raid_bdev1", 00:14:50.770 "uuid": "1a375ca5-b754-48e0-8535-73492d1f4bb8", 00:14:50.770 "strip_size_kb": 0, 00:14:50.770 "state": "online", 00:14:50.770 "raid_level": "raid1", 00:14:50.770 "superblock": true, 00:14:50.770 "num_base_bdevs": 4, 00:14:50.770 "num_base_bdevs_discovered": 2, 00:14:50.770 "num_base_bdevs_operational": 2, 00:14:50.770 
"base_bdevs_list": [ 00:14:50.770 { 00:14:50.770 "name": null, 00:14:50.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.770 "is_configured": false, 00:14:50.770 "data_offset": 0, 00:14:50.770 "data_size": 63488 00:14:50.770 }, 00:14:50.770 { 00:14:50.770 "name": null, 00:14:50.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.770 "is_configured": false, 00:14:50.770 "data_offset": 2048, 00:14:50.770 "data_size": 63488 00:14:50.770 }, 00:14:50.770 { 00:14:50.770 "name": "BaseBdev3", 00:14:50.770 "uuid": "44bc23e9-bb57-5804-b04d-2e93b32543bc", 00:14:50.770 "is_configured": true, 00:14:50.770 "data_offset": 2048, 00:14:50.770 "data_size": 63488 00:14:50.770 }, 00:14:50.770 { 00:14:50.770 "name": "BaseBdev4", 00:14:50.770 "uuid": "3317b06e-ce12-5276-856c-fcd72ad12192", 00:14:50.770 "is_configured": true, 00:14:50.770 "data_offset": 2048, 00:14:50.770 "data_size": 63488 00:14:50.770 } 00:14:50.770 ] 00:14:50.770 }' 00:14:50.770 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.030 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.030 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:51.030 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.030 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 80919 00:14:51.030 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 80919 ']' 00:14:51.030 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 80919 00:14:51.030 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:51.030 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:51.031 12:40:50 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80919 00:14:51.031 killing process with pid 80919 00:14:51.031 Received shutdown signal, test time was about 17.861245 seconds 00:14:51.031 00:14:51.031 Latency(us) 00:14:51.031 [2024-12-14T12:40:50.769Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.031 [2024-12-14T12:40:50.769Z] =================================================================================================================== 00:14:51.031 [2024-12-14T12:40:50.769Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:51.031 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:51.031 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:51.031 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80919' 00:14:51.031 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 80919 00:14:51.031 [2024-12-14 12:40:50.603273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:51.031 [2024-12-14 12:40:50.603405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:51.031 12:40:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 80919 00:14:51.031 [2024-12-14 12:40:50.603473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:51.031 [2024-12-14 12:40:50.603486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:51.290 [2024-12-14 12:40:51.005458] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:52.672 ************************************ 00:14:52.672 END TEST raid_rebuild_test_sb_io 00:14:52.672 ************************************ 00:14:52.672 12:40:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:14:52.672 00:14:52.672 real 0m21.239s 00:14:52.672 user 0m27.850s 00:14:52.672 sys 0m2.524s 00:14:52.672 12:40:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:52.672 12:40:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.672 12:40:52 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:52.672 12:40:52 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:52.672 12:40:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:52.672 12:40:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:52.672 12:40:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:52.672 ************************************ 00:14:52.672 START TEST raid5f_state_function_test 00:14:52.672 ************************************ 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:52.672 12:40:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 
00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=81643 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:52.672 Process raid pid: 81643 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 81643' 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 81643 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 81643 ']' 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:52.672 12:40:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:52.672 [2024-12-14 12:40:52.296453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:14:52.672 [2024-12-14 12:40:52.296657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.932 [2024-12-14 12:40:52.469094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.932 [2024-12-14 12:40:52.579462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.192 [2024-12-14 12:40:52.771522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.192 [2024-12-14 12:40:52.771564] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.452 [2024-12-14 12:40:53.118157] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:53.452 [2024-12-14 12:40:53.118205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:53.452 [2024-12-14 12:40:53.118215] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:53.452 [2024-12-14 12:40:53.118224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:53.452 [2024-12-14 12:40:53.118231] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:53.452 [2024-12-14 12:40:53.118239] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.452 "name": "Existed_Raid", 00:14:53.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.452 "strip_size_kb": 64, 00:14:53.452 "state": "configuring", 00:14:53.452 "raid_level": "raid5f", 00:14:53.452 "superblock": false, 00:14:53.452 "num_base_bdevs": 3, 00:14:53.452 "num_base_bdevs_discovered": 0, 00:14:53.452 "num_base_bdevs_operational": 3, 00:14:53.452 "base_bdevs_list": [ 00:14:53.452 { 00:14:53.452 "name": "BaseBdev1", 00:14:53.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.452 "is_configured": false, 00:14:53.452 "data_offset": 0, 00:14:53.452 "data_size": 0 00:14:53.452 }, 00:14:53.452 { 00:14:53.452 "name": "BaseBdev2", 00:14:53.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.452 "is_configured": false, 00:14:53.452 "data_offset": 0, 00:14:53.452 "data_size": 0 00:14:53.452 }, 00:14:53.452 { 00:14:53.452 "name": "BaseBdev3", 00:14:53.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.452 "is_configured": false, 00:14:53.452 "data_offset": 0, 00:14:53.452 "data_size": 0 00:14:53.452 } 00:14:53.452 ] 00:14:53.452 }' 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.452 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.019 [2024-12-14 12:40:53.509418] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.019 [2024-12-14 12:40:53.509454] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.019 [2024-12-14 12:40:53.521389] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:54.019 [2024-12-14 12:40:53.521430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:54.019 [2024-12-14 12:40:53.521439] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:54.019 [2024-12-14 12:40:53.521448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.019 [2024-12-14 12:40:53.521454] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:54.019 [2024-12-14 12:40:53.521463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.019 [2024-12-14 12:40:53.566088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.019 BaseBdev1 00:14:54.019 12:40:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.019 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.019 [ 00:14:54.019 { 00:14:54.019 "name": "BaseBdev1", 00:14:54.019 "aliases": [ 00:14:54.019 "8e33899b-6dec-4d19-aec7-891617238c6f" 00:14:54.019 ], 00:14:54.019 "product_name": "Malloc disk", 00:14:54.019 "block_size": 512, 00:14:54.019 "num_blocks": 65536, 00:14:54.019 "uuid": "8e33899b-6dec-4d19-aec7-891617238c6f", 00:14:54.019 "assigned_rate_limits": { 00:14:54.019 "rw_ios_per_sec": 0, 00:14:54.019 
"rw_mbytes_per_sec": 0, 00:14:54.020 "r_mbytes_per_sec": 0, 00:14:54.020 "w_mbytes_per_sec": 0 00:14:54.020 }, 00:14:54.020 "claimed": true, 00:14:54.020 "claim_type": "exclusive_write", 00:14:54.020 "zoned": false, 00:14:54.020 "supported_io_types": { 00:14:54.020 "read": true, 00:14:54.020 "write": true, 00:14:54.020 "unmap": true, 00:14:54.020 "flush": true, 00:14:54.020 "reset": true, 00:14:54.020 "nvme_admin": false, 00:14:54.020 "nvme_io": false, 00:14:54.020 "nvme_io_md": false, 00:14:54.020 "write_zeroes": true, 00:14:54.020 "zcopy": true, 00:14:54.020 "get_zone_info": false, 00:14:54.020 "zone_management": false, 00:14:54.020 "zone_append": false, 00:14:54.020 "compare": false, 00:14:54.020 "compare_and_write": false, 00:14:54.020 "abort": true, 00:14:54.020 "seek_hole": false, 00:14:54.020 "seek_data": false, 00:14:54.020 "copy": true, 00:14:54.020 "nvme_iov_md": false 00:14:54.020 }, 00:14:54.020 "memory_domains": [ 00:14:54.020 { 00:14:54.020 "dma_device_id": "system", 00:14:54.020 "dma_device_type": 1 00:14:54.020 }, 00:14:54.020 { 00:14:54.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.020 "dma_device_type": 2 00:14:54.020 } 00:14:54.020 ], 00:14:54.020 "driver_specific": {} 00:14:54.020 } 00:14:54.020 ] 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.020 12:40:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.020 "name": "Existed_Raid", 00:14:54.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.020 "strip_size_kb": 64, 00:14:54.020 "state": "configuring", 00:14:54.020 "raid_level": "raid5f", 00:14:54.020 "superblock": false, 00:14:54.020 "num_base_bdevs": 3, 00:14:54.020 "num_base_bdevs_discovered": 1, 00:14:54.020 "num_base_bdevs_operational": 3, 00:14:54.020 "base_bdevs_list": [ 00:14:54.020 { 00:14:54.020 "name": "BaseBdev1", 00:14:54.020 "uuid": "8e33899b-6dec-4d19-aec7-891617238c6f", 00:14:54.020 "is_configured": true, 00:14:54.020 "data_offset": 0, 00:14:54.020 "data_size": 65536 00:14:54.020 }, 00:14:54.020 { 00:14:54.020 "name": 
"BaseBdev2", 00:14:54.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.020 "is_configured": false, 00:14:54.020 "data_offset": 0, 00:14:54.020 "data_size": 0 00:14:54.020 }, 00:14:54.020 { 00:14:54.020 "name": "BaseBdev3", 00:14:54.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.020 "is_configured": false, 00:14:54.020 "data_offset": 0, 00:14:54.020 "data_size": 0 00:14:54.020 } 00:14:54.020 ] 00:14:54.020 }' 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.020 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.278 12:40:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:54.278 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.278 12:40:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.278 [2024-12-14 12:40:54.001375] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:54.278 [2024-12-14 12:40:54.001486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:54.278 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.278 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:54.278 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.278 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.278 [2024-12-14 12:40:54.013396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:54.278 [2024-12-14 12:40:54.015195] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:54.537 [2024-12-14 12:40:54.015280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:54.537 [2024-12-14 12:40:54.015295] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:54.537 [2024-12-14 12:40:54.015305] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.537 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.538 12:40:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.538 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.538 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.538 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.538 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.538 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.538 "name": "Existed_Raid", 00:14:54.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.538 "strip_size_kb": 64, 00:14:54.538 "state": "configuring", 00:14:54.538 "raid_level": "raid5f", 00:14:54.538 "superblock": false, 00:14:54.538 "num_base_bdevs": 3, 00:14:54.538 "num_base_bdevs_discovered": 1, 00:14:54.538 "num_base_bdevs_operational": 3, 00:14:54.538 "base_bdevs_list": [ 00:14:54.538 { 00:14:54.538 "name": "BaseBdev1", 00:14:54.538 "uuid": "8e33899b-6dec-4d19-aec7-891617238c6f", 00:14:54.538 "is_configured": true, 00:14:54.538 "data_offset": 0, 00:14:54.538 "data_size": 65536 00:14:54.538 }, 00:14:54.538 { 00:14:54.538 "name": "BaseBdev2", 00:14:54.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.538 "is_configured": false, 00:14:54.538 "data_offset": 0, 00:14:54.538 "data_size": 0 00:14:54.538 }, 00:14:54.538 { 00:14:54.538 "name": "BaseBdev3", 00:14:54.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.538 "is_configured": false, 00:14:54.538 "data_offset": 0, 00:14:54.538 "data_size": 0 00:14:54.538 } 00:14:54.538 ] 00:14:54.538 }' 00:14:54.538 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.538 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.798 [2024-12-14 12:40:54.467679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:54.798 BaseBdev2 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:54.798 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.799 [ 00:14:54.799 { 00:14:54.799 "name": "BaseBdev2", 00:14:54.799 "aliases": [ 00:14:54.799 "a553dd1a-489b-4578-9ca3-fde5bc03abe0" 00:14:54.799 ], 00:14:54.799 "product_name": "Malloc disk", 00:14:54.799 "block_size": 512, 00:14:54.799 "num_blocks": 65536, 00:14:54.799 "uuid": "a553dd1a-489b-4578-9ca3-fde5bc03abe0", 00:14:54.799 "assigned_rate_limits": { 00:14:54.799 "rw_ios_per_sec": 0, 00:14:54.799 "rw_mbytes_per_sec": 0, 00:14:54.799 "r_mbytes_per_sec": 0, 00:14:54.799 "w_mbytes_per_sec": 0 00:14:54.799 }, 00:14:54.799 "claimed": true, 00:14:54.799 "claim_type": "exclusive_write", 00:14:54.799 "zoned": false, 00:14:54.799 "supported_io_types": { 00:14:54.799 "read": true, 00:14:54.799 "write": true, 00:14:54.799 "unmap": true, 00:14:54.799 "flush": true, 00:14:54.799 "reset": true, 00:14:54.799 "nvme_admin": false, 00:14:54.799 "nvme_io": false, 00:14:54.799 "nvme_io_md": false, 00:14:54.799 "write_zeroes": true, 00:14:54.799 "zcopy": true, 00:14:54.799 "get_zone_info": false, 00:14:54.799 "zone_management": false, 00:14:54.799 "zone_append": false, 00:14:54.799 "compare": false, 00:14:54.799 "compare_and_write": false, 00:14:54.799 "abort": true, 00:14:54.799 "seek_hole": false, 00:14:54.799 "seek_data": false, 00:14:54.799 "copy": true, 00:14:54.799 "nvme_iov_md": false 00:14:54.799 }, 00:14:54.799 "memory_domains": [ 00:14:54.799 { 00:14:54.799 "dma_device_id": "system", 00:14:54.799 "dma_device_type": 1 00:14:54.799 }, 00:14:54.799 { 00:14:54.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.799 "dma_device_type": 2 00:14:54.799 } 00:14:54.799 ], 00:14:54.799 "driver_specific": {} 00:14:54.799 } 00:14:54.799 ] 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.799 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.091 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:55.091 "name": "Existed_Raid", 00:14:55.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.091 "strip_size_kb": 64, 00:14:55.091 "state": "configuring", 00:14:55.091 "raid_level": "raid5f", 00:14:55.091 "superblock": false, 00:14:55.091 "num_base_bdevs": 3, 00:14:55.091 "num_base_bdevs_discovered": 2, 00:14:55.091 "num_base_bdevs_operational": 3, 00:14:55.091 "base_bdevs_list": [ 00:14:55.091 { 00:14:55.091 "name": "BaseBdev1", 00:14:55.091 "uuid": "8e33899b-6dec-4d19-aec7-891617238c6f", 00:14:55.091 "is_configured": true, 00:14:55.091 "data_offset": 0, 00:14:55.091 "data_size": 65536 00:14:55.091 }, 00:14:55.091 { 00:14:55.091 "name": "BaseBdev2", 00:14:55.091 "uuid": "a553dd1a-489b-4578-9ca3-fde5bc03abe0", 00:14:55.091 "is_configured": true, 00:14:55.091 "data_offset": 0, 00:14:55.091 "data_size": 65536 00:14:55.091 }, 00:14:55.091 { 00:14:55.091 "name": "BaseBdev3", 00:14:55.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.091 "is_configured": false, 00:14:55.091 "data_offset": 0, 00:14:55.091 "data_size": 0 00:14:55.091 } 00:14:55.091 ] 00:14:55.091 }' 00:14:55.091 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.091 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.351 [2024-12-14 12:40:54.984728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:55.351 [2024-12-14 12:40:54.984878] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:55.351 [2024-12-14 12:40:54.984902] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:55.351 [2024-12-14 12:40:54.985201] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:55.351 [2024-12-14 12:40:54.990731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:55.351 [2024-12-14 12:40:54.990799] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:55.351 [2024-12-14 12:40:54.991125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.351 BaseBdev3 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.351 12:40:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.351 [ 00:14:55.351 { 00:14:55.351 "name": "BaseBdev3", 00:14:55.351 "aliases": [ 00:14:55.351 "23d8047d-0057-4c87-8f99-55b33a1d9076" 00:14:55.351 ], 00:14:55.351 "product_name": "Malloc disk", 00:14:55.351 "block_size": 512, 00:14:55.351 "num_blocks": 65536, 00:14:55.351 "uuid": "23d8047d-0057-4c87-8f99-55b33a1d9076", 00:14:55.351 "assigned_rate_limits": { 00:14:55.351 "rw_ios_per_sec": 0, 00:14:55.351 "rw_mbytes_per_sec": 0, 00:14:55.351 "r_mbytes_per_sec": 0, 00:14:55.351 "w_mbytes_per_sec": 0 00:14:55.351 }, 00:14:55.351 "claimed": true, 00:14:55.351 "claim_type": "exclusive_write", 00:14:55.351 "zoned": false, 00:14:55.351 "supported_io_types": { 00:14:55.351 "read": true, 00:14:55.351 "write": true, 00:14:55.351 "unmap": true, 00:14:55.351 "flush": true, 00:14:55.351 "reset": true, 00:14:55.351 "nvme_admin": false, 00:14:55.351 "nvme_io": false, 00:14:55.351 "nvme_io_md": false, 00:14:55.351 "write_zeroes": true, 00:14:55.351 "zcopy": true, 00:14:55.351 "get_zone_info": false, 00:14:55.351 "zone_management": false, 00:14:55.351 "zone_append": false, 00:14:55.351 "compare": false, 00:14:55.351 "compare_and_write": false, 00:14:55.351 "abort": true, 00:14:55.351 "seek_hole": false, 00:14:55.351 "seek_data": false, 00:14:55.351 "copy": true, 00:14:55.351 "nvme_iov_md": false 00:14:55.351 }, 00:14:55.351 "memory_domains": [ 00:14:55.351 { 00:14:55.351 "dma_device_id": "system", 00:14:55.351 "dma_device_type": 1 00:14:55.351 }, 00:14:55.351 { 00:14:55.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.351 "dma_device_type": 2 00:14:55.351 } 00:14:55.351 ], 00:14:55.351 "driver_specific": {} 00:14:55.351 } 00:14:55.351 ] 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.351 12:40:55 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.351 "name": "Existed_Raid", 00:14:55.351 "uuid": "26bb46f4-9747-4c38-8d1f-dac9e47800f4", 00:14:55.351 "strip_size_kb": 64, 00:14:55.351 "state": "online", 00:14:55.351 "raid_level": "raid5f", 00:14:55.351 "superblock": false, 00:14:55.351 "num_base_bdevs": 3, 00:14:55.351 "num_base_bdevs_discovered": 3, 00:14:55.351 "num_base_bdevs_operational": 3, 00:14:55.351 "base_bdevs_list": [ 00:14:55.351 { 00:14:55.351 "name": "BaseBdev1", 00:14:55.351 "uuid": "8e33899b-6dec-4d19-aec7-891617238c6f", 00:14:55.351 "is_configured": true, 00:14:55.351 "data_offset": 0, 00:14:55.351 "data_size": 65536 00:14:55.351 }, 00:14:55.351 { 00:14:55.351 "name": "BaseBdev2", 00:14:55.351 "uuid": "a553dd1a-489b-4578-9ca3-fde5bc03abe0", 00:14:55.351 "is_configured": true, 00:14:55.351 "data_offset": 0, 00:14:55.351 "data_size": 65536 00:14:55.351 }, 00:14:55.351 { 00:14:55.351 "name": "BaseBdev3", 00:14:55.351 "uuid": "23d8047d-0057-4c87-8f99-55b33a1d9076", 00:14:55.351 "is_configured": true, 00:14:55.351 "data_offset": 0, 00:14:55.351 "data_size": 65536 00:14:55.351 } 00:14:55.351 ] 00:14:55.351 }' 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.351 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:55.921 12:40:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.921 [2024-12-14 12:40:55.512626] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:55.921 "name": "Existed_Raid", 00:14:55.921 "aliases": [ 00:14:55.921 "26bb46f4-9747-4c38-8d1f-dac9e47800f4" 00:14:55.921 ], 00:14:55.921 "product_name": "Raid Volume", 00:14:55.921 "block_size": 512, 00:14:55.921 "num_blocks": 131072, 00:14:55.921 "uuid": "26bb46f4-9747-4c38-8d1f-dac9e47800f4", 00:14:55.921 "assigned_rate_limits": { 00:14:55.921 "rw_ios_per_sec": 0, 00:14:55.921 "rw_mbytes_per_sec": 0, 00:14:55.921 "r_mbytes_per_sec": 0, 00:14:55.921 "w_mbytes_per_sec": 0 00:14:55.921 }, 00:14:55.921 "claimed": false, 00:14:55.921 "zoned": false, 00:14:55.921 "supported_io_types": { 00:14:55.921 "read": true, 00:14:55.921 "write": true, 00:14:55.921 "unmap": false, 00:14:55.921 "flush": false, 00:14:55.921 "reset": true, 00:14:55.921 "nvme_admin": false, 00:14:55.921 "nvme_io": false, 00:14:55.921 "nvme_io_md": false, 00:14:55.921 "write_zeroes": true, 00:14:55.921 "zcopy": false, 00:14:55.921 "get_zone_info": false, 00:14:55.921 "zone_management": false, 00:14:55.921 "zone_append": false, 
00:14:55.921 "compare": false, 00:14:55.921 "compare_and_write": false, 00:14:55.921 "abort": false, 00:14:55.921 "seek_hole": false, 00:14:55.921 "seek_data": false, 00:14:55.921 "copy": false, 00:14:55.921 "nvme_iov_md": false 00:14:55.921 }, 00:14:55.921 "driver_specific": { 00:14:55.921 "raid": { 00:14:55.921 "uuid": "26bb46f4-9747-4c38-8d1f-dac9e47800f4", 00:14:55.921 "strip_size_kb": 64, 00:14:55.921 "state": "online", 00:14:55.921 "raid_level": "raid5f", 00:14:55.921 "superblock": false, 00:14:55.921 "num_base_bdevs": 3, 00:14:55.921 "num_base_bdevs_discovered": 3, 00:14:55.921 "num_base_bdevs_operational": 3, 00:14:55.921 "base_bdevs_list": [ 00:14:55.921 { 00:14:55.921 "name": "BaseBdev1", 00:14:55.921 "uuid": "8e33899b-6dec-4d19-aec7-891617238c6f", 00:14:55.921 "is_configured": true, 00:14:55.921 "data_offset": 0, 00:14:55.921 "data_size": 65536 00:14:55.921 }, 00:14:55.921 { 00:14:55.921 "name": "BaseBdev2", 00:14:55.921 "uuid": "a553dd1a-489b-4578-9ca3-fde5bc03abe0", 00:14:55.921 "is_configured": true, 00:14:55.921 "data_offset": 0, 00:14:55.921 "data_size": 65536 00:14:55.921 }, 00:14:55.921 { 00:14:55.921 "name": "BaseBdev3", 00:14:55.921 "uuid": "23d8047d-0057-4c87-8f99-55b33a1d9076", 00:14:55.921 "is_configured": true, 00:14:55.921 "data_offset": 0, 00:14:55.921 "data_size": 65536 00:14:55.921 } 00:14:55.921 ] 00:14:55.921 } 00:14:55.921 } 00:14:55.921 }' 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:55.921 BaseBdev2 00:14:55.921 BaseBdev3' 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.921 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.181 [2024-12-14 12:40:55.776008] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:56.181 
12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.181 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.441 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.441 "name": "Existed_Raid", 00:14:56.441 "uuid": "26bb46f4-9747-4c38-8d1f-dac9e47800f4", 00:14:56.441 "strip_size_kb": 64, 00:14:56.441 "state": 
"online", 00:14:56.441 "raid_level": "raid5f", 00:14:56.441 "superblock": false, 00:14:56.441 "num_base_bdevs": 3, 00:14:56.441 "num_base_bdevs_discovered": 2, 00:14:56.441 "num_base_bdevs_operational": 2, 00:14:56.441 "base_bdevs_list": [ 00:14:56.441 { 00:14:56.441 "name": null, 00:14:56.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.441 "is_configured": false, 00:14:56.441 "data_offset": 0, 00:14:56.441 "data_size": 65536 00:14:56.441 }, 00:14:56.441 { 00:14:56.441 "name": "BaseBdev2", 00:14:56.441 "uuid": "a553dd1a-489b-4578-9ca3-fde5bc03abe0", 00:14:56.441 "is_configured": true, 00:14:56.441 "data_offset": 0, 00:14:56.441 "data_size": 65536 00:14:56.441 }, 00:14:56.441 { 00:14:56.441 "name": "BaseBdev3", 00:14:56.441 "uuid": "23d8047d-0057-4c87-8f99-55b33a1d9076", 00:14:56.441 "is_configured": true, 00:14:56.441 "data_offset": 0, 00:14:56.441 "data_size": 65536 00:14:56.441 } 00:14:56.441 ] 00:14:56.441 }' 00:14:56.442 12:40:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.442 12:40:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.702 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.702 [2024-12-14 12:40:56.374204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:56.702 [2024-12-14 12:40:56.374344] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.962 [2024-12-14 12:40:56.467487] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.962 [2024-12-14 12:40:56.527444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:56.962 [2024-12-14 12:40:56.527493] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.962 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.963 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:56.963 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:56.963 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:56.963 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:56.963 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:56.963 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:56.963 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.963 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.223 BaseBdev2 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:57.223 [ 00:14:57.223 { 00:14:57.223 "name": "BaseBdev2", 00:14:57.223 "aliases": [ 00:14:57.223 "94276334-25e9-4c89-973f-80a5709ecb82" 00:14:57.223 ], 00:14:57.223 "product_name": "Malloc disk", 00:14:57.223 "block_size": 512, 00:14:57.223 "num_blocks": 65536, 00:14:57.223 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:14:57.223 "assigned_rate_limits": { 00:14:57.223 "rw_ios_per_sec": 0, 00:14:57.223 "rw_mbytes_per_sec": 0, 00:14:57.223 "r_mbytes_per_sec": 0, 00:14:57.223 "w_mbytes_per_sec": 0 00:14:57.223 }, 00:14:57.223 "claimed": false, 00:14:57.223 "zoned": false, 00:14:57.223 "supported_io_types": { 00:14:57.223 "read": true, 00:14:57.223 "write": true, 00:14:57.223 "unmap": true, 00:14:57.223 "flush": true, 00:14:57.223 "reset": true, 00:14:57.223 "nvme_admin": false, 00:14:57.223 "nvme_io": false, 00:14:57.223 "nvme_io_md": false, 00:14:57.223 "write_zeroes": true, 00:14:57.223 "zcopy": true, 00:14:57.223 "get_zone_info": false, 00:14:57.223 "zone_management": false, 00:14:57.223 "zone_append": false, 00:14:57.223 "compare": false, 00:14:57.223 "compare_and_write": false, 00:14:57.223 "abort": true, 00:14:57.223 "seek_hole": false, 00:14:57.223 "seek_data": false, 00:14:57.223 "copy": true, 00:14:57.223 "nvme_iov_md": false 00:14:57.223 }, 00:14:57.223 "memory_domains": [ 00:14:57.223 { 00:14:57.223 "dma_device_id": "system", 00:14:57.223 "dma_device_type": 1 00:14:57.223 }, 00:14:57.223 { 00:14:57.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.223 "dma_device_type": 2 00:14:57.223 } 00:14:57.223 ], 00:14:57.223 "driver_specific": {} 00:14:57.223 } 00:14:57.223 ] 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.223 BaseBdev3 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.223 12:40:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.223 [ 00:14:57.223 { 00:14:57.223 "name": "BaseBdev3", 00:14:57.223 "aliases": [ 00:14:57.223 "2e3e593e-5495-480e-8b84-72521ea85670" 00:14:57.223 ], 00:14:57.223 "product_name": "Malloc disk", 00:14:57.223 "block_size": 512, 00:14:57.223 "num_blocks": 65536, 00:14:57.223 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:14:57.223 "assigned_rate_limits": { 00:14:57.223 "rw_ios_per_sec": 0, 00:14:57.223 "rw_mbytes_per_sec": 0, 00:14:57.223 "r_mbytes_per_sec": 0, 00:14:57.223 "w_mbytes_per_sec": 0 00:14:57.223 }, 00:14:57.223 "claimed": false, 00:14:57.223 "zoned": false, 00:14:57.223 "supported_io_types": { 00:14:57.224 "read": true, 00:14:57.224 "write": true, 00:14:57.224 "unmap": true, 00:14:57.224 "flush": true, 00:14:57.224 "reset": true, 00:14:57.224 "nvme_admin": false, 00:14:57.224 "nvme_io": false, 00:14:57.224 "nvme_io_md": false, 00:14:57.224 "write_zeroes": true, 00:14:57.224 "zcopy": true, 00:14:57.224 "get_zone_info": false, 00:14:57.224 "zone_management": false, 00:14:57.224 "zone_append": false, 00:14:57.224 "compare": false, 00:14:57.224 "compare_and_write": false, 00:14:57.224 "abort": true, 00:14:57.224 "seek_hole": false, 00:14:57.224 "seek_data": false, 00:14:57.224 "copy": true, 00:14:57.224 "nvme_iov_md": false 00:14:57.224 }, 00:14:57.224 "memory_domains": [ 00:14:57.224 { 00:14:57.224 "dma_device_id": "system", 00:14:57.224 "dma_device_type": 1 00:14:57.224 }, 00:14:57.224 { 00:14:57.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.224 "dma_device_type": 2 00:14:57.224 } 00:14:57.224 ], 00:14:57.224 "driver_specific": {} 00:14:57.224 } 00:14:57.224 ] 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:57.224 12:40:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.224 [2024-12-14 12:40:56.835167] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:57.224 [2024-12-14 12:40:56.835260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:57.224 [2024-12-14 12:40:56.835302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.224 [2024-12-14 12:40:56.837162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.224 12:40:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.224 "name": "Existed_Raid", 00:14:57.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.224 "strip_size_kb": 64, 00:14:57.224 "state": "configuring", 00:14:57.224 "raid_level": "raid5f", 00:14:57.224 "superblock": false, 00:14:57.224 "num_base_bdevs": 3, 00:14:57.224 "num_base_bdevs_discovered": 2, 00:14:57.224 "num_base_bdevs_operational": 3, 00:14:57.224 "base_bdevs_list": [ 00:14:57.224 { 00:14:57.224 "name": "BaseBdev1", 00:14:57.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.224 "is_configured": false, 00:14:57.224 "data_offset": 0, 00:14:57.224 "data_size": 0 00:14:57.224 }, 00:14:57.224 { 00:14:57.224 "name": "BaseBdev2", 00:14:57.224 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:14:57.224 "is_configured": true, 00:14:57.224 "data_offset": 0, 00:14:57.224 "data_size": 65536 00:14:57.224 }, 00:14:57.224 { 00:14:57.224 "name": "BaseBdev3", 00:14:57.224 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:14:57.224 "is_configured": true, 
00:14:57.224 "data_offset": 0, 00:14:57.224 "data_size": 65536 00:14:57.224 } 00:14:57.224 ] 00:14:57.224 }' 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.224 12:40:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.794 [2024-12-14 12:40:57.298390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.794 12:40:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.794 "name": "Existed_Raid", 00:14:57.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.794 "strip_size_kb": 64, 00:14:57.794 "state": "configuring", 00:14:57.794 "raid_level": "raid5f", 00:14:57.794 "superblock": false, 00:14:57.794 "num_base_bdevs": 3, 00:14:57.794 "num_base_bdevs_discovered": 1, 00:14:57.794 "num_base_bdevs_operational": 3, 00:14:57.794 "base_bdevs_list": [ 00:14:57.794 { 00:14:57.794 "name": "BaseBdev1", 00:14:57.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.794 "is_configured": false, 00:14:57.794 "data_offset": 0, 00:14:57.794 "data_size": 0 00:14:57.794 }, 00:14:57.794 { 00:14:57.794 "name": null, 00:14:57.794 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:14:57.794 "is_configured": false, 00:14:57.794 "data_offset": 0, 00:14:57.794 "data_size": 65536 00:14:57.794 }, 00:14:57.794 { 00:14:57.794 "name": "BaseBdev3", 00:14:57.794 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:14:57.794 "is_configured": true, 00:14:57.794 "data_offset": 0, 00:14:57.794 "data_size": 65536 00:14:57.794 } 00:14:57.794 ] 00:14:57.794 }' 00:14:57.794 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.794 12:40:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.054 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.314 [2024-12-14 12:40:57.814342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.314 BaseBdev1 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:58.314 12:40:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.314 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.314 [ 00:14:58.314 { 00:14:58.314 "name": "BaseBdev1", 00:14:58.314 "aliases": [ 00:14:58.314 "944b5be0-e958-45f3-b6ae-3d85f7f8a56b" 00:14:58.314 ], 00:14:58.314 "product_name": "Malloc disk", 00:14:58.314 "block_size": 512, 00:14:58.314 "num_blocks": 65536, 00:14:58.314 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:14:58.314 "assigned_rate_limits": { 00:14:58.314 "rw_ios_per_sec": 0, 00:14:58.314 "rw_mbytes_per_sec": 0, 00:14:58.314 "r_mbytes_per_sec": 0, 00:14:58.314 "w_mbytes_per_sec": 0 00:14:58.314 }, 00:14:58.314 "claimed": true, 00:14:58.314 "claim_type": "exclusive_write", 00:14:58.314 "zoned": false, 00:14:58.314 "supported_io_types": { 00:14:58.314 "read": true, 00:14:58.315 "write": true, 00:14:58.315 "unmap": true, 00:14:58.315 "flush": true, 00:14:58.315 "reset": true, 00:14:58.315 "nvme_admin": false, 00:14:58.315 "nvme_io": false, 00:14:58.315 "nvme_io_md": false, 00:14:58.315 "write_zeroes": true, 00:14:58.315 "zcopy": true, 00:14:58.315 "get_zone_info": false, 00:14:58.315 "zone_management": false, 00:14:58.315 "zone_append": false, 00:14:58.315 
"compare": false, 00:14:58.315 "compare_and_write": false, 00:14:58.315 "abort": true, 00:14:58.315 "seek_hole": false, 00:14:58.315 "seek_data": false, 00:14:58.315 "copy": true, 00:14:58.315 "nvme_iov_md": false 00:14:58.315 }, 00:14:58.315 "memory_domains": [ 00:14:58.315 { 00:14:58.315 "dma_device_id": "system", 00:14:58.315 "dma_device_type": 1 00:14:58.315 }, 00:14:58.315 { 00:14:58.315 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:58.315 "dma_device_type": 2 00:14:58.315 } 00:14:58.315 ], 00:14:58.315 "driver_specific": {} 00:14:58.315 } 00:14:58.315 ] 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.315 12:40:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.315 "name": "Existed_Raid", 00:14:58.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.315 "strip_size_kb": 64, 00:14:58.315 "state": "configuring", 00:14:58.315 "raid_level": "raid5f", 00:14:58.315 "superblock": false, 00:14:58.315 "num_base_bdevs": 3, 00:14:58.315 "num_base_bdevs_discovered": 2, 00:14:58.315 "num_base_bdevs_operational": 3, 00:14:58.315 "base_bdevs_list": [ 00:14:58.315 { 00:14:58.315 "name": "BaseBdev1", 00:14:58.315 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:14:58.315 "is_configured": true, 00:14:58.315 "data_offset": 0, 00:14:58.315 "data_size": 65536 00:14:58.315 }, 00:14:58.315 { 00:14:58.315 "name": null, 00:14:58.315 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:14:58.315 "is_configured": false, 00:14:58.315 "data_offset": 0, 00:14:58.315 "data_size": 65536 00:14:58.315 }, 00:14:58.315 { 00:14:58.315 "name": "BaseBdev3", 00:14:58.315 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:14:58.315 "is_configured": true, 00:14:58.315 "data_offset": 0, 00:14:58.315 "data_size": 65536 00:14:58.315 } 00:14:58.315 ] 00:14:58.315 }' 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.315 12:40:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.574 12:40:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:58.574 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.574 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.574 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.574 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.834 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:58.834 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.835 [2024-12-14 12:40:58.325524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:58.835 12:40:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.835 "name": "Existed_Raid", 00:14:58.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.835 "strip_size_kb": 64, 00:14:58.835 "state": "configuring", 00:14:58.835 "raid_level": "raid5f", 00:14:58.835 "superblock": false, 00:14:58.835 "num_base_bdevs": 3, 00:14:58.835 "num_base_bdevs_discovered": 1, 00:14:58.835 "num_base_bdevs_operational": 3, 00:14:58.835 "base_bdevs_list": [ 00:14:58.835 { 00:14:58.835 "name": "BaseBdev1", 00:14:58.835 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:14:58.835 "is_configured": true, 00:14:58.835 "data_offset": 0, 00:14:58.835 "data_size": 65536 00:14:58.835 }, 00:14:58.835 { 00:14:58.835 "name": null, 00:14:58.835 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:14:58.835 "is_configured": false, 00:14:58.835 "data_offset": 0, 00:14:58.835 "data_size": 65536 00:14:58.835 }, 00:14:58.835 { 00:14:58.835 "name": null, 
00:14:58.835 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:14:58.835 "is_configured": false, 00:14:58.835 "data_offset": 0, 00:14:58.835 "data_size": 65536 00:14:58.835 } 00:14:58.835 ] 00:14:58.835 }' 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.835 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.095 [2024-12-14 12:40:58.800753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.095 12:40:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.095 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.354 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.354 "name": "Existed_Raid", 00:14:59.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.354 "strip_size_kb": 64, 00:14:59.354 "state": "configuring", 00:14:59.354 "raid_level": "raid5f", 00:14:59.354 "superblock": false, 00:14:59.354 "num_base_bdevs": 3, 00:14:59.354 "num_base_bdevs_discovered": 2, 00:14:59.354 "num_base_bdevs_operational": 3, 00:14:59.354 "base_bdevs_list": [ 00:14:59.354 { 
00:14:59.354 "name": "BaseBdev1", 00:14:59.354 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:14:59.354 "is_configured": true, 00:14:59.354 "data_offset": 0, 00:14:59.354 "data_size": 65536 00:14:59.354 }, 00:14:59.354 { 00:14:59.354 "name": null, 00:14:59.354 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:14:59.354 "is_configured": false, 00:14:59.354 "data_offset": 0, 00:14:59.354 "data_size": 65536 00:14:59.354 }, 00:14:59.354 { 00:14:59.354 "name": "BaseBdev3", 00:14:59.354 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:14:59.354 "is_configured": true, 00:14:59.354 "data_offset": 0, 00:14:59.354 "data_size": 65536 00:14:59.354 } 00:14:59.354 ] 00:14:59.354 }' 00:14:59.354 12:40:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.354 12:40:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.614 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.614 [2024-12-14 12:40:59.315870] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.874 "name": "Existed_Raid", 00:14:59.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.874 "strip_size_kb": 64, 00:14:59.874 "state": "configuring", 00:14:59.874 "raid_level": "raid5f", 00:14:59.874 "superblock": false, 00:14:59.874 "num_base_bdevs": 3, 00:14:59.874 "num_base_bdevs_discovered": 1, 00:14:59.874 "num_base_bdevs_operational": 3, 00:14:59.874 "base_bdevs_list": [ 00:14:59.874 { 00:14:59.874 "name": null, 00:14:59.874 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:14:59.874 "is_configured": false, 00:14:59.874 "data_offset": 0, 00:14:59.874 "data_size": 65536 00:14:59.874 }, 00:14:59.874 { 00:14:59.874 "name": null, 00:14:59.874 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:14:59.874 "is_configured": false, 00:14:59.874 "data_offset": 0, 00:14:59.874 "data_size": 65536 00:14:59.874 }, 00:14:59.874 { 00:14:59.874 "name": "BaseBdev3", 00:14:59.874 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:14:59.874 "is_configured": true, 00:14:59.874 "data_offset": 0, 00:14:59.874 "data_size": 65536 00:14:59.874 } 00:14:59.874 ] 00:14:59.874 }' 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.874 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.134 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.134 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:00.134 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.134 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.393 [2024-12-14 12:40:59.909933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.393 12:40:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.393 "name": "Existed_Raid", 00:15:00.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.393 "strip_size_kb": 64, 00:15:00.393 "state": "configuring", 00:15:00.393 "raid_level": "raid5f", 00:15:00.393 "superblock": false, 00:15:00.393 "num_base_bdevs": 3, 00:15:00.393 "num_base_bdevs_discovered": 2, 00:15:00.393 "num_base_bdevs_operational": 3, 00:15:00.393 "base_bdevs_list": [ 00:15:00.393 { 00:15:00.393 "name": null, 00:15:00.393 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:15:00.393 "is_configured": false, 00:15:00.393 "data_offset": 0, 00:15:00.393 "data_size": 65536 00:15:00.393 }, 00:15:00.393 { 00:15:00.393 "name": "BaseBdev2", 00:15:00.393 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:15:00.393 "is_configured": true, 00:15:00.393 "data_offset": 0, 00:15:00.393 "data_size": 65536 00:15:00.393 }, 00:15:00.393 { 00:15:00.393 "name": "BaseBdev3", 00:15:00.393 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:15:00.393 "is_configured": true, 00:15:00.393 "data_offset": 0, 00:15:00.393 "data_size": 65536 00:15:00.393 } 00:15:00.393 ] 00:15:00.393 }' 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.393 12:40:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.653 12:41:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.653 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 944b5be0-e958-45f3-b6ae-3d85f7f8a56b 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.913 [2024-12-14 12:41:00.469203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:00.913 [2024-12-14 12:41:00.469249] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:00.913 [2024-12-14 12:41:00.469259] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:00.913 [2024-12-14 12:41:00.469490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:00.913 [2024-12-14 12:41:00.474553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:00.913 [2024-12-14 12:41:00.474574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:00.913 [2024-12-14 12:41:00.474812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.913 NewBaseBdev 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.913 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.914 12:41:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.914 [ 00:15:00.914 { 00:15:00.914 "name": "NewBaseBdev", 00:15:00.914 "aliases": [ 00:15:00.914 "944b5be0-e958-45f3-b6ae-3d85f7f8a56b" 00:15:00.914 ], 00:15:00.914 "product_name": "Malloc disk", 00:15:00.914 "block_size": 512, 00:15:00.914 "num_blocks": 65536, 00:15:00.914 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:15:00.914 "assigned_rate_limits": { 00:15:00.914 "rw_ios_per_sec": 0, 00:15:00.914 "rw_mbytes_per_sec": 0, 00:15:00.914 "r_mbytes_per_sec": 0, 00:15:00.914 "w_mbytes_per_sec": 0 00:15:00.914 }, 00:15:00.914 "claimed": true, 00:15:00.914 "claim_type": "exclusive_write", 00:15:00.914 "zoned": false, 00:15:00.914 "supported_io_types": { 00:15:00.914 "read": true, 00:15:00.914 "write": true, 00:15:00.914 "unmap": true, 00:15:00.914 "flush": true, 00:15:00.914 "reset": true, 00:15:00.914 "nvme_admin": false, 00:15:00.914 "nvme_io": false, 00:15:00.914 "nvme_io_md": false, 00:15:00.914 "write_zeroes": true, 00:15:00.914 "zcopy": true, 00:15:00.914 "get_zone_info": false, 00:15:00.914 "zone_management": false, 00:15:00.914 "zone_append": false, 00:15:00.914 "compare": false, 00:15:00.914 "compare_and_write": false, 00:15:00.914 "abort": true, 00:15:00.914 "seek_hole": false, 00:15:00.914 "seek_data": false, 00:15:00.914 "copy": true, 00:15:00.914 "nvme_iov_md": false 00:15:00.914 }, 00:15:00.914 "memory_domains": [ 00:15:00.914 { 00:15:00.914 "dma_device_id": "system", 00:15:00.914 "dma_device_type": 1 00:15:00.914 }, 00:15:00.914 { 00:15:00.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.914 "dma_device_type": 2 00:15:00.914 } 00:15:00.914 ], 00:15:00.914 "driver_specific": {} 00:15:00.914 } 00:15:00.914 ] 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:00.914 12:41:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.914 "name": "Existed_Raid", 00:15:00.914 "uuid": "3a980349-1508-4023-aa94-dd8ef4306d50", 00:15:00.914 "strip_size_kb": 64, 00:15:00.914 "state": "online", 
00:15:00.914 "raid_level": "raid5f", 00:15:00.914 "superblock": false, 00:15:00.914 "num_base_bdevs": 3, 00:15:00.914 "num_base_bdevs_discovered": 3, 00:15:00.914 "num_base_bdevs_operational": 3, 00:15:00.914 "base_bdevs_list": [ 00:15:00.914 { 00:15:00.914 "name": "NewBaseBdev", 00:15:00.914 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:15:00.914 "is_configured": true, 00:15:00.914 "data_offset": 0, 00:15:00.914 "data_size": 65536 00:15:00.914 }, 00:15:00.914 { 00:15:00.914 "name": "BaseBdev2", 00:15:00.914 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:15:00.914 "is_configured": true, 00:15:00.914 "data_offset": 0, 00:15:00.914 "data_size": 65536 00:15:00.914 }, 00:15:00.914 { 00:15:00.914 "name": "BaseBdev3", 00:15:00.914 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:15:00.914 "is_configured": true, 00:15:00.914 "data_offset": 0, 00:15:00.914 "data_size": 65536 00:15:00.914 } 00:15:00.914 ] 00:15:00.914 }' 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.914 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.484 [2024-12-14 12:41:00.936372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:01.484 "name": "Existed_Raid", 00:15:01.484 "aliases": [ 00:15:01.484 "3a980349-1508-4023-aa94-dd8ef4306d50" 00:15:01.484 ], 00:15:01.484 "product_name": "Raid Volume", 00:15:01.484 "block_size": 512, 00:15:01.484 "num_blocks": 131072, 00:15:01.484 "uuid": "3a980349-1508-4023-aa94-dd8ef4306d50", 00:15:01.484 "assigned_rate_limits": { 00:15:01.484 "rw_ios_per_sec": 0, 00:15:01.484 "rw_mbytes_per_sec": 0, 00:15:01.484 "r_mbytes_per_sec": 0, 00:15:01.484 "w_mbytes_per_sec": 0 00:15:01.484 }, 00:15:01.484 "claimed": false, 00:15:01.484 "zoned": false, 00:15:01.484 "supported_io_types": { 00:15:01.484 "read": true, 00:15:01.484 "write": true, 00:15:01.484 "unmap": false, 00:15:01.484 "flush": false, 00:15:01.484 "reset": true, 00:15:01.484 "nvme_admin": false, 00:15:01.484 "nvme_io": false, 00:15:01.484 "nvme_io_md": false, 00:15:01.484 "write_zeroes": true, 00:15:01.484 "zcopy": false, 00:15:01.484 "get_zone_info": false, 00:15:01.484 "zone_management": false, 00:15:01.484 "zone_append": false, 00:15:01.484 "compare": false, 00:15:01.484 "compare_and_write": false, 00:15:01.484 "abort": false, 00:15:01.484 "seek_hole": false, 00:15:01.484 "seek_data": false, 00:15:01.484 "copy": false, 00:15:01.484 "nvme_iov_md": false 00:15:01.484 }, 00:15:01.484 "driver_specific": { 00:15:01.484 "raid": { 00:15:01.484 "uuid": "3a980349-1508-4023-aa94-dd8ef4306d50", 
00:15:01.484 "strip_size_kb": 64, 00:15:01.484 "state": "online", 00:15:01.484 "raid_level": "raid5f", 00:15:01.484 "superblock": false, 00:15:01.484 "num_base_bdevs": 3, 00:15:01.484 "num_base_bdevs_discovered": 3, 00:15:01.484 "num_base_bdevs_operational": 3, 00:15:01.484 "base_bdevs_list": [ 00:15:01.484 { 00:15:01.484 "name": "NewBaseBdev", 00:15:01.484 "uuid": "944b5be0-e958-45f3-b6ae-3d85f7f8a56b", 00:15:01.484 "is_configured": true, 00:15:01.484 "data_offset": 0, 00:15:01.484 "data_size": 65536 00:15:01.484 }, 00:15:01.484 { 00:15:01.484 "name": "BaseBdev2", 00:15:01.484 "uuid": "94276334-25e9-4c89-973f-80a5709ecb82", 00:15:01.484 "is_configured": true, 00:15:01.484 "data_offset": 0, 00:15:01.484 "data_size": 65536 00:15:01.484 }, 00:15:01.484 { 00:15:01.484 "name": "BaseBdev3", 00:15:01.484 "uuid": "2e3e593e-5495-480e-8b84-72521ea85670", 00:15:01.484 "is_configured": true, 00:15:01.484 "data_offset": 0, 00:15:01.484 "data_size": 65536 00:15:01.484 } 00:15:01.484 ] 00:15:01.484 } 00:15:01.484 } 00:15:01.484 }' 00:15:01.484 12:41:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:01.484 BaseBdev2 00:15:01.484 BaseBdev3' 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:01.484 12:41:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.484 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.485 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:01.485 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:01.485 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:01.485 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.485 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.485 [2024-12-14 12:41:01.215679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.485 [2024-12-14 12:41:01.215708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.485 [2024-12-14 12:41:01.215787] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.485 [2024-12-14 12:41:01.216079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.485 [2024-12-14 12:41:01.216094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 81643 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 81643 ']' 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 81643 
00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81643 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81643' 00:15:01.745 killing process with pid 81643 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 81643 00:15:01.745 [2024-12-14 12:41:01.263566] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.745 12:41:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 81643 00:15:02.004 [2024-12-14 12:41:01.548691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:02.942 12:41:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:02.942 00:15:02.942 real 0m10.420s 00:15:02.942 user 0m16.626s 00:15:02.942 sys 0m1.813s 00:15:02.942 ************************************ 00:15:02.942 END TEST raid5f_state_function_test 00:15:02.943 ************************************ 00:15:02.943 12:41:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.943 12:41:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.204 12:41:02 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:03.204 12:41:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:03.204 
12:41:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.204 12:41:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.204 ************************************ 00:15:03.204 START TEST raid5f_state_function_test_sb 00:15:03.204 ************************************ 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:03.204 
12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=82264 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82264' 00:15:03.204 Process raid pid: 82264 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 82264 00:15:03.204 12:41:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82264 ']' 00:15:03.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.204 12:41:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.204 [2024-12-14 12:41:02.790567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:03.204 [2024-12-14 12:41:02.790674] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.464 [2024-12-14 12:41:02.964179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.464 [2024-12-14 12:41:03.073200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.724 [2024-12-14 12:41:03.268203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.724 [2024-12-14 12:41:03.268233] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:03.984 12:41:03 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.984 [2024-12-14 12:41:03.602789] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:03.984 [2024-12-14 12:41:03.602842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:03.984 [2024-12-14 12:41:03.602852] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:03.984 [2024-12-14 12:41:03.602862] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:03.984 [2024-12-14 12:41:03.602873] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:03.984 [2024-12-14 12:41:03.602881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.984 "name": "Existed_Raid", 00:15:03.984 "uuid": "da875ee7-06c6-4b0e-ae56-2e12e401e022", 00:15:03.984 "strip_size_kb": 64, 00:15:03.984 "state": "configuring", 00:15:03.984 "raid_level": "raid5f", 00:15:03.984 "superblock": true, 00:15:03.984 "num_base_bdevs": 3, 00:15:03.984 "num_base_bdevs_discovered": 0, 00:15:03.984 "num_base_bdevs_operational": 3, 00:15:03.984 "base_bdevs_list": [ 00:15:03.984 { 00:15:03.984 "name": "BaseBdev1", 00:15:03.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.984 "is_configured": false, 00:15:03.984 "data_offset": 0, 00:15:03.984 "data_size": 0 00:15:03.984 }, 00:15:03.984 { 00:15:03.984 "name": "BaseBdev2", 00:15:03.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.984 "is_configured": false, 00:15:03.984 
"data_offset": 0, 00:15:03.984 "data_size": 0 00:15:03.984 }, 00:15:03.984 { 00:15:03.984 "name": "BaseBdev3", 00:15:03.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.984 "is_configured": false, 00:15:03.984 "data_offset": 0, 00:15:03.984 "data_size": 0 00:15:03.984 } 00:15:03.984 ] 00:15:03.984 }' 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.984 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 12:41:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:04.555 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.555 12:41:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 [2024-12-14 12:41:03.998054] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.555 [2024-12-14 12:41:03.998148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 [2024-12-14 12:41:04.010028] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:04.555 [2024-12-14 12:41:04.010118] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:04.555 [2024-12-14 12:41:04.010151] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.555 [2024-12-14 12:41:04.010191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.555 [2024-12-14 12:41:04.010242] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:04.555 [2024-12-14 12:41:04.010265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 [2024-12-14 12:41:04.055130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.555 BaseBdev1 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.555 [ 00:15:04.555 { 00:15:04.555 "name": "BaseBdev1", 00:15:04.555 "aliases": [ 00:15:04.555 "2b22568a-9467-4aaf-9602-511ba3123c4b" 00:15:04.555 ], 00:15:04.555 "product_name": "Malloc disk", 00:15:04.555 "block_size": 512, 00:15:04.555 "num_blocks": 65536, 00:15:04.555 "uuid": "2b22568a-9467-4aaf-9602-511ba3123c4b", 00:15:04.555 "assigned_rate_limits": { 00:15:04.555 "rw_ios_per_sec": 0, 00:15:04.555 "rw_mbytes_per_sec": 0, 00:15:04.555 "r_mbytes_per_sec": 0, 00:15:04.555 "w_mbytes_per_sec": 0 00:15:04.555 }, 00:15:04.555 "claimed": true, 00:15:04.555 "claim_type": "exclusive_write", 00:15:04.555 "zoned": false, 00:15:04.555 "supported_io_types": { 00:15:04.555 "read": true, 00:15:04.555 "write": true, 00:15:04.555 "unmap": true, 00:15:04.555 "flush": true, 00:15:04.555 "reset": true, 00:15:04.555 "nvme_admin": false, 00:15:04.555 "nvme_io": false, 00:15:04.555 "nvme_io_md": false, 00:15:04.555 "write_zeroes": true, 00:15:04.555 "zcopy": true, 00:15:04.555 "get_zone_info": false, 00:15:04.555 "zone_management": false, 00:15:04.555 "zone_append": false, 00:15:04.555 "compare": false, 00:15:04.555 "compare_and_write": false, 00:15:04.555 "abort": true, 00:15:04.555 "seek_hole": false, 00:15:04.555 
"seek_data": false, 00:15:04.555 "copy": true, 00:15:04.555 "nvme_iov_md": false 00:15:04.555 }, 00:15:04.555 "memory_domains": [ 00:15:04.555 { 00:15:04.555 "dma_device_id": "system", 00:15:04.555 "dma_device_type": 1 00:15:04.555 }, 00:15:04.555 { 00:15:04.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:04.555 "dma_device_type": 2 00:15:04.555 } 00:15:04.555 ], 00:15:04.555 "driver_specific": {} 00:15:04.555 } 00:15:04.555 ] 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.555 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.556 "name": "Existed_Raid", 00:15:04.556 "uuid": "42101a32-fc6a-49b2-a9b8-0e2cfaa24094", 00:15:04.556 "strip_size_kb": 64, 00:15:04.556 "state": "configuring", 00:15:04.556 "raid_level": "raid5f", 00:15:04.556 "superblock": true, 00:15:04.556 "num_base_bdevs": 3, 00:15:04.556 "num_base_bdevs_discovered": 1, 00:15:04.556 "num_base_bdevs_operational": 3, 00:15:04.556 "base_bdevs_list": [ 00:15:04.556 { 00:15:04.556 "name": "BaseBdev1", 00:15:04.556 "uuid": "2b22568a-9467-4aaf-9602-511ba3123c4b", 00:15:04.556 "is_configured": true, 00:15:04.556 "data_offset": 2048, 00:15:04.556 "data_size": 63488 00:15:04.556 }, 00:15:04.556 { 00:15:04.556 "name": "BaseBdev2", 00:15:04.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.556 "is_configured": false, 00:15:04.556 "data_offset": 0, 00:15:04.556 "data_size": 0 00:15:04.556 }, 00:15:04.556 { 00:15:04.556 "name": "BaseBdev3", 00:15:04.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.556 "is_configured": false, 00:15:04.556 "data_offset": 0, 00:15:04.556 "data_size": 0 00:15:04.556 } 00:15:04.556 ] 00:15:04.556 }' 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.556 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.816 [2024-12-14 12:41:04.498444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:04.816 [2024-12-14 12:41:04.498514] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.816 [2024-12-14 12:41:04.506494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.816 [2024-12-14 12:41:04.508324] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:04.816 [2024-12-14 12:41:04.508364] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:04.816 [2024-12-14 12:41:04.508374] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:04.816 [2024-12-14 12:41:04.508382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:04.816 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.076 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.076 "name": 
"Existed_Raid", 00:15:05.076 "uuid": "505a3d21-e74a-45bf-8ac5-1262b8978f9f", 00:15:05.076 "strip_size_kb": 64, 00:15:05.076 "state": "configuring", 00:15:05.076 "raid_level": "raid5f", 00:15:05.076 "superblock": true, 00:15:05.076 "num_base_bdevs": 3, 00:15:05.076 "num_base_bdevs_discovered": 1, 00:15:05.076 "num_base_bdevs_operational": 3, 00:15:05.076 "base_bdevs_list": [ 00:15:05.076 { 00:15:05.076 "name": "BaseBdev1", 00:15:05.076 "uuid": "2b22568a-9467-4aaf-9602-511ba3123c4b", 00:15:05.076 "is_configured": true, 00:15:05.076 "data_offset": 2048, 00:15:05.076 "data_size": 63488 00:15:05.076 }, 00:15:05.076 { 00:15:05.076 "name": "BaseBdev2", 00:15:05.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.076 "is_configured": false, 00:15:05.076 "data_offset": 0, 00:15:05.076 "data_size": 0 00:15:05.076 }, 00:15:05.076 { 00:15:05.076 "name": "BaseBdev3", 00:15:05.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.076 "is_configured": false, 00:15:05.076 "data_offset": 0, 00:15:05.076 "data_size": 0 00:15:05.076 } 00:15:05.076 ] 00:15:05.076 }' 00:15:05.076 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.076 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.336 [2024-12-14 12:41:04.987195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:05.336 BaseBdev2 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 
-- # waitforbdev BaseBdev2 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:05.336 12:41:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.336 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.336 [ 00:15:05.336 { 00:15:05.336 "name": "BaseBdev2", 00:15:05.336 "aliases": [ 00:15:05.336 "6581e306-dd1a-4a9f-aaa4-a4a2814b8e18" 00:15:05.336 ], 00:15:05.336 "product_name": "Malloc disk", 00:15:05.336 "block_size": 512, 00:15:05.336 "num_blocks": 65536, 00:15:05.336 "uuid": "6581e306-dd1a-4a9f-aaa4-a4a2814b8e18", 00:15:05.336 "assigned_rate_limits": { 00:15:05.336 "rw_ios_per_sec": 0, 00:15:05.336 "rw_mbytes_per_sec": 0, 00:15:05.336 "r_mbytes_per_sec": 0, 00:15:05.336 "w_mbytes_per_sec": 0 00:15:05.337 }, 00:15:05.337 "claimed": true, 
00:15:05.337 "claim_type": "exclusive_write", 00:15:05.337 "zoned": false, 00:15:05.337 "supported_io_types": { 00:15:05.337 "read": true, 00:15:05.337 "write": true, 00:15:05.337 "unmap": true, 00:15:05.337 "flush": true, 00:15:05.337 "reset": true, 00:15:05.337 "nvme_admin": false, 00:15:05.337 "nvme_io": false, 00:15:05.337 "nvme_io_md": false, 00:15:05.337 "write_zeroes": true, 00:15:05.337 "zcopy": true, 00:15:05.337 "get_zone_info": false, 00:15:05.337 "zone_management": false, 00:15:05.337 "zone_append": false, 00:15:05.337 "compare": false, 00:15:05.337 "compare_and_write": false, 00:15:05.337 "abort": true, 00:15:05.337 "seek_hole": false, 00:15:05.337 "seek_data": false, 00:15:05.337 "copy": true, 00:15:05.337 "nvme_iov_md": false 00:15:05.337 }, 00:15:05.337 "memory_domains": [ 00:15:05.337 { 00:15:05.337 "dma_device_id": "system", 00:15:05.337 "dma_device_type": 1 00:15:05.337 }, 00:15:05.337 { 00:15:05.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.337 "dma_device_type": 2 00:15:05.337 } 00:15:05.337 ], 00:15:05.337 "driver_specific": {} 00:15:05.337 } 00:15:05.337 ] 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:05.337 12:41:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.337 "name": "Existed_Raid", 00:15:05.337 "uuid": "505a3d21-e74a-45bf-8ac5-1262b8978f9f", 00:15:05.337 "strip_size_kb": 64, 00:15:05.337 "state": "configuring", 00:15:05.337 "raid_level": "raid5f", 00:15:05.337 "superblock": true, 00:15:05.337 "num_base_bdevs": 3, 00:15:05.337 "num_base_bdevs_discovered": 2, 00:15:05.337 "num_base_bdevs_operational": 3, 00:15:05.337 "base_bdevs_list": [ 00:15:05.337 { 00:15:05.337 "name": "BaseBdev1", 00:15:05.337 "uuid": "2b22568a-9467-4aaf-9602-511ba3123c4b", 
00:15:05.337 "is_configured": true, 00:15:05.337 "data_offset": 2048, 00:15:05.337 "data_size": 63488 00:15:05.337 }, 00:15:05.337 { 00:15:05.337 "name": "BaseBdev2", 00:15:05.337 "uuid": "6581e306-dd1a-4a9f-aaa4-a4a2814b8e18", 00:15:05.337 "is_configured": true, 00:15:05.337 "data_offset": 2048, 00:15:05.337 "data_size": 63488 00:15:05.337 }, 00:15:05.337 { 00:15:05.337 "name": "BaseBdev3", 00:15:05.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.337 "is_configured": false, 00:15:05.337 "data_offset": 0, 00:15:05.337 "data_size": 0 00:15:05.337 } 00:15:05.337 ] 00:15:05.337 }' 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.337 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.905 [2024-12-14 12:41:05.519754] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:05.905 [2024-12-14 12:41:05.520006] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:05.905 [2024-12-14 12:41:05.520026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:05.905 [2024-12-14 12:41:05.520316] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:05.905 BaseBdev3 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.905 [2024-12-14 12:41:05.525891] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:05.905 [2024-12-14 12:41:05.525951] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:05.905 [2024-12-14 12:41:05.526172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.905 [ 00:15:05.905 { 00:15:05.905 "name": "BaseBdev3", 00:15:05.905 "aliases": [ 00:15:05.905 "0ada2af2-d94d-4abf-b2bb-4c5757a463f0" 00:15:05.905 ], 00:15:05.905 "product_name": "Malloc disk", 00:15:05.905 "block_size": 512, 00:15:05.905 
"num_blocks": 65536, 00:15:05.905 "uuid": "0ada2af2-d94d-4abf-b2bb-4c5757a463f0", 00:15:05.905 "assigned_rate_limits": { 00:15:05.905 "rw_ios_per_sec": 0, 00:15:05.905 "rw_mbytes_per_sec": 0, 00:15:05.905 "r_mbytes_per_sec": 0, 00:15:05.905 "w_mbytes_per_sec": 0 00:15:05.905 }, 00:15:05.905 "claimed": true, 00:15:05.905 "claim_type": "exclusive_write", 00:15:05.905 "zoned": false, 00:15:05.905 "supported_io_types": { 00:15:05.905 "read": true, 00:15:05.905 "write": true, 00:15:05.905 "unmap": true, 00:15:05.905 "flush": true, 00:15:05.905 "reset": true, 00:15:05.905 "nvme_admin": false, 00:15:05.905 "nvme_io": false, 00:15:05.905 "nvme_io_md": false, 00:15:05.905 "write_zeroes": true, 00:15:05.905 "zcopy": true, 00:15:05.905 "get_zone_info": false, 00:15:05.905 "zone_management": false, 00:15:05.905 "zone_append": false, 00:15:05.905 "compare": false, 00:15:05.905 "compare_and_write": false, 00:15:05.905 "abort": true, 00:15:05.905 "seek_hole": false, 00:15:05.905 "seek_data": false, 00:15:05.905 "copy": true, 00:15:05.905 "nvme_iov_md": false 00:15:05.905 }, 00:15:05.905 "memory_domains": [ 00:15:05.905 { 00:15:05.905 "dma_device_id": "system", 00:15:05.905 "dma_device_type": 1 00:15:05.905 }, 00:15:05.905 { 00:15:05.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:05.905 "dma_device_type": 2 00:15:05.905 } 00:15:05.905 ], 00:15:05.905 "driver_specific": {} 00:15:05.905 } 00:15:05.905 ] 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 
3 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:05.905 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.906 "name": "Existed_Raid", 00:15:05.906 "uuid": "505a3d21-e74a-45bf-8ac5-1262b8978f9f", 00:15:05.906 "strip_size_kb": 64, 00:15:05.906 "state": "online", 00:15:05.906 "raid_level": "raid5f", 00:15:05.906 "superblock": true, 
00:15:05.906 "num_base_bdevs": 3, 00:15:05.906 "num_base_bdevs_discovered": 3, 00:15:05.906 "num_base_bdevs_operational": 3, 00:15:05.906 "base_bdevs_list": [ 00:15:05.906 { 00:15:05.906 "name": "BaseBdev1", 00:15:05.906 "uuid": "2b22568a-9467-4aaf-9602-511ba3123c4b", 00:15:05.906 "is_configured": true, 00:15:05.906 "data_offset": 2048, 00:15:05.906 "data_size": 63488 00:15:05.906 }, 00:15:05.906 { 00:15:05.906 "name": "BaseBdev2", 00:15:05.906 "uuid": "6581e306-dd1a-4a9f-aaa4-a4a2814b8e18", 00:15:05.906 "is_configured": true, 00:15:05.906 "data_offset": 2048, 00:15:05.906 "data_size": 63488 00:15:05.906 }, 00:15:05.906 { 00:15:05.906 "name": "BaseBdev3", 00:15:05.906 "uuid": "0ada2af2-d94d-4abf-b2bb-4c5757a463f0", 00:15:05.906 "is_configured": true, 00:15:05.906 "data_offset": 2048, 00:15:05.906 "data_size": 63488 00:15:05.906 } 00:15:05.906 ] 00:15:05.906 }' 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.906 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.475 12:41:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.475 [2024-12-14 12:41:05.991435] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:06.475 "name": "Existed_Raid", 00:15:06.475 "aliases": [ 00:15:06.475 "505a3d21-e74a-45bf-8ac5-1262b8978f9f" 00:15:06.475 ], 00:15:06.475 "product_name": "Raid Volume", 00:15:06.475 "block_size": 512, 00:15:06.475 "num_blocks": 126976, 00:15:06.475 "uuid": "505a3d21-e74a-45bf-8ac5-1262b8978f9f", 00:15:06.475 "assigned_rate_limits": { 00:15:06.475 "rw_ios_per_sec": 0, 00:15:06.475 "rw_mbytes_per_sec": 0, 00:15:06.475 "r_mbytes_per_sec": 0, 00:15:06.475 "w_mbytes_per_sec": 0 00:15:06.475 }, 00:15:06.475 "claimed": false, 00:15:06.475 "zoned": false, 00:15:06.475 "supported_io_types": { 00:15:06.475 "read": true, 00:15:06.475 "write": true, 00:15:06.475 "unmap": false, 00:15:06.475 "flush": false, 00:15:06.475 "reset": true, 00:15:06.475 "nvme_admin": false, 00:15:06.475 "nvme_io": false, 00:15:06.475 "nvme_io_md": false, 00:15:06.475 "write_zeroes": true, 00:15:06.475 "zcopy": false, 00:15:06.475 "get_zone_info": false, 00:15:06.475 "zone_management": false, 00:15:06.475 "zone_append": false, 00:15:06.475 "compare": false, 00:15:06.475 "compare_and_write": false, 00:15:06.475 "abort": false, 00:15:06.475 "seek_hole": false, 00:15:06.475 "seek_data": false, 00:15:06.475 "copy": false, 00:15:06.475 "nvme_iov_md": false 00:15:06.475 }, 00:15:06.475 "driver_specific": { 00:15:06.475 "raid": { 00:15:06.475 "uuid": "505a3d21-e74a-45bf-8ac5-1262b8978f9f", 00:15:06.475 
"strip_size_kb": 64, 00:15:06.475 "state": "online", 00:15:06.475 "raid_level": "raid5f", 00:15:06.475 "superblock": true, 00:15:06.475 "num_base_bdevs": 3, 00:15:06.475 "num_base_bdevs_discovered": 3, 00:15:06.475 "num_base_bdevs_operational": 3, 00:15:06.475 "base_bdevs_list": [ 00:15:06.475 { 00:15:06.475 "name": "BaseBdev1", 00:15:06.475 "uuid": "2b22568a-9467-4aaf-9602-511ba3123c4b", 00:15:06.475 "is_configured": true, 00:15:06.475 "data_offset": 2048, 00:15:06.475 "data_size": 63488 00:15:06.475 }, 00:15:06.475 { 00:15:06.475 "name": "BaseBdev2", 00:15:06.475 "uuid": "6581e306-dd1a-4a9f-aaa4-a4a2814b8e18", 00:15:06.475 "is_configured": true, 00:15:06.475 "data_offset": 2048, 00:15:06.475 "data_size": 63488 00:15:06.475 }, 00:15:06.475 { 00:15:06.475 "name": "BaseBdev3", 00:15:06.475 "uuid": "0ada2af2-d94d-4abf-b2bb-4c5757a463f0", 00:15:06.475 "is_configured": true, 00:15:06.475 "data_offset": 2048, 00:15:06.475 "data_size": 63488 00:15:06.475 } 00:15:06.475 ] 00:15:06.475 } 00:15:06.475 } 00:15:06.475 }' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:06.475 BaseBdev2 00:15:06.475 BaseBdev3' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.475 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.476 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:06.476 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:06.476 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:06.476 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.476 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.735 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.736 [2024-12-14 12:41:06.250826] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.736 "name": "Existed_Raid", 00:15:06.736 "uuid": "505a3d21-e74a-45bf-8ac5-1262b8978f9f", 00:15:06.736 "strip_size_kb": 64, 00:15:06.736 "state": "online", 00:15:06.736 "raid_level": "raid5f", 00:15:06.736 "superblock": true, 00:15:06.736 "num_base_bdevs": 3, 00:15:06.736 "num_base_bdevs_discovered": 2, 00:15:06.736 "num_base_bdevs_operational": 2, 
00:15:06.736 "base_bdevs_list": [ 00:15:06.736 { 00:15:06.736 "name": null, 00:15:06.736 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.736 "is_configured": false, 00:15:06.736 "data_offset": 0, 00:15:06.736 "data_size": 63488 00:15:06.736 }, 00:15:06.736 { 00:15:06.736 "name": "BaseBdev2", 00:15:06.736 "uuid": "6581e306-dd1a-4a9f-aaa4-a4a2814b8e18", 00:15:06.736 "is_configured": true, 00:15:06.736 "data_offset": 2048, 00:15:06.736 "data_size": 63488 00:15:06.736 }, 00:15:06.736 { 00:15:06.736 "name": "BaseBdev3", 00:15:06.736 "uuid": "0ada2af2-d94d-4abf-b2bb-4c5757a463f0", 00:15:06.736 "is_configured": true, 00:15:06.736 "data_offset": 2048, 00:15:06.736 "data_size": 63488 00:15:06.736 } 00:15:06.736 ] 00:15:06.736 }' 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.736 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.305 [2024-12-14 12:41:06.828687] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:07.305 [2024-12-14 12:41:06.828831] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.305 [2024-12-14 12:41:06.919454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:07.305 
12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.305 12:41:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.305 [2024-12-14 12:41:06.979392] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:07.305 [2024-12-14 12:41:06.979500] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.566 BaseBdev2 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.566 [ 00:15:07.566 { 
00:15:07.566 "name": "BaseBdev2", 00:15:07.566 "aliases": [ 00:15:07.566 "438e1a7e-c1cb-4859-89f9-23dffe5acc03" 00:15:07.566 ], 00:15:07.566 "product_name": "Malloc disk", 00:15:07.566 "block_size": 512, 00:15:07.566 "num_blocks": 65536, 00:15:07.566 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:07.566 "assigned_rate_limits": { 00:15:07.566 "rw_ios_per_sec": 0, 00:15:07.566 "rw_mbytes_per_sec": 0, 00:15:07.566 "r_mbytes_per_sec": 0, 00:15:07.566 "w_mbytes_per_sec": 0 00:15:07.566 }, 00:15:07.566 "claimed": false, 00:15:07.566 "zoned": false, 00:15:07.566 "supported_io_types": { 00:15:07.566 "read": true, 00:15:07.566 "write": true, 00:15:07.566 "unmap": true, 00:15:07.566 "flush": true, 00:15:07.566 "reset": true, 00:15:07.566 "nvme_admin": false, 00:15:07.566 "nvme_io": false, 00:15:07.566 "nvme_io_md": false, 00:15:07.566 "write_zeroes": true, 00:15:07.566 "zcopy": true, 00:15:07.566 "get_zone_info": false, 00:15:07.566 "zone_management": false, 00:15:07.566 "zone_append": false, 00:15:07.566 "compare": false, 00:15:07.566 "compare_and_write": false, 00:15:07.566 "abort": true, 00:15:07.566 "seek_hole": false, 00:15:07.566 "seek_data": false, 00:15:07.566 "copy": true, 00:15:07.566 "nvme_iov_md": false 00:15:07.566 }, 00:15:07.566 "memory_domains": [ 00:15:07.566 { 00:15:07.566 "dma_device_id": "system", 00:15:07.566 "dma_device_type": 1 00:15:07.566 }, 00:15:07.566 { 00:15:07.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.566 "dma_device_type": 2 00:15:07.566 } 00:15:07.566 ], 00:15:07.566 "driver_specific": {} 00:15:07.566 } 00:15:07.566 ] 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- 
# (( i < num_base_bdevs )) 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:07.566 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.567 BaseBdev3 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.567 12:41:07 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.567 [ 00:15:07.567 { 00:15:07.567 "name": "BaseBdev3", 00:15:07.567 "aliases": [ 00:15:07.567 "57312ae6-e003-4692-b969-b95d41fef9d7" 00:15:07.567 ], 00:15:07.567 "product_name": "Malloc disk", 00:15:07.567 "block_size": 512, 00:15:07.567 "num_blocks": 65536, 00:15:07.567 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:07.567 "assigned_rate_limits": { 00:15:07.567 "rw_ios_per_sec": 0, 00:15:07.567 "rw_mbytes_per_sec": 0, 00:15:07.567 "r_mbytes_per_sec": 0, 00:15:07.567 "w_mbytes_per_sec": 0 00:15:07.567 }, 00:15:07.567 "claimed": false, 00:15:07.567 "zoned": false, 00:15:07.567 "supported_io_types": { 00:15:07.567 "read": true, 00:15:07.567 "write": true, 00:15:07.567 "unmap": true, 00:15:07.567 "flush": true, 00:15:07.567 "reset": true, 00:15:07.567 "nvme_admin": false, 00:15:07.567 "nvme_io": false, 00:15:07.567 "nvme_io_md": false, 00:15:07.567 "write_zeroes": true, 00:15:07.567 "zcopy": true, 00:15:07.567 "get_zone_info": false, 00:15:07.567 "zone_management": false, 00:15:07.567 "zone_append": false, 00:15:07.567 "compare": false, 00:15:07.567 "compare_and_write": false, 00:15:07.567 "abort": true, 00:15:07.567 "seek_hole": false, 00:15:07.567 "seek_data": false, 00:15:07.567 "copy": true, 00:15:07.567 "nvme_iov_md": false 00:15:07.567 }, 00:15:07.567 "memory_domains": [ 00:15:07.567 { 00:15:07.567 "dma_device_id": "system", 00:15:07.567 "dma_device_type": 1 00:15:07.567 }, 00:15:07.567 { 00:15:07.567 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.567 "dma_device_type": 2 00:15:07.567 } 00:15:07.567 ], 00:15:07.567 "driver_specific": {} 00:15:07.567 } 00:15:07.567 ] 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.567 [2024-12-14 12:41:07.284647] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:07.567 [2024-12-14 12:41:07.284687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:07.567 [2024-12-14 12:41:07.284706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.567 [2024-12-14 12:41:07.286456] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.567 12:41:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.567 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.827 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.827 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.827 "name": "Existed_Raid", 00:15:07.827 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:07.827 "strip_size_kb": 64, 00:15:07.827 "state": "configuring", 00:15:07.827 "raid_level": "raid5f", 00:15:07.827 "superblock": true, 00:15:07.827 "num_base_bdevs": 3, 00:15:07.827 "num_base_bdevs_discovered": 2, 00:15:07.827 "num_base_bdevs_operational": 3, 00:15:07.827 "base_bdevs_list": [ 00:15:07.827 { 00:15:07.827 "name": "BaseBdev1", 00:15:07.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.827 "is_configured": false, 00:15:07.827 "data_offset": 0, 00:15:07.827 "data_size": 0 00:15:07.827 }, 00:15:07.827 { 00:15:07.827 "name": "BaseBdev2", 00:15:07.827 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:07.827 "is_configured": true, 00:15:07.827 "data_offset": 2048, 00:15:07.827 "data_size": 63488 00:15:07.827 }, 00:15:07.827 { 
00:15:07.827 "name": "BaseBdev3", 00:15:07.827 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:07.827 "is_configured": true, 00:15:07.827 "data_offset": 2048, 00:15:07.827 "data_size": 63488 00:15:07.827 } 00:15:07.827 ] 00:15:07.827 }' 00:15:07.827 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.827 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.087 [2024-12-14 12:41:07.675996] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.087 "name": "Existed_Raid", 00:15:08.087 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:08.087 "strip_size_kb": 64, 00:15:08.087 "state": "configuring", 00:15:08.087 "raid_level": "raid5f", 00:15:08.087 "superblock": true, 00:15:08.087 "num_base_bdevs": 3, 00:15:08.087 "num_base_bdevs_discovered": 1, 00:15:08.087 "num_base_bdevs_operational": 3, 00:15:08.087 "base_bdevs_list": [ 00:15:08.087 { 00:15:08.087 "name": "BaseBdev1", 00:15:08.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.087 "is_configured": false, 00:15:08.087 "data_offset": 0, 00:15:08.087 "data_size": 0 00:15:08.087 }, 00:15:08.087 { 00:15:08.087 "name": null, 00:15:08.087 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:08.087 "is_configured": false, 00:15:08.087 "data_offset": 0, 00:15:08.087 "data_size": 63488 00:15:08.087 }, 00:15:08.087 { 00:15:08.087 "name": "BaseBdev3", 00:15:08.087 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:08.087 "is_configured": true, 00:15:08.087 "data_offset": 2048, 00:15:08.087 "data_size": 
63488 00:15:08.087 } 00:15:08.087 ] 00:15:08.087 }' 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.087 12:41:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.347 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.347 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:08.347 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.347 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.347 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 [2024-12-14 12:41:08.144291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:08.607 BaseBdev1 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.607 12:41:08 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 [ 00:15:08.607 { 00:15:08.607 "name": "BaseBdev1", 00:15:08.607 "aliases": [ 00:15:08.607 "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5" 00:15:08.607 ], 00:15:08.607 "product_name": "Malloc disk", 00:15:08.607 "block_size": 512, 00:15:08.607 "num_blocks": 65536, 00:15:08.607 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 00:15:08.607 "assigned_rate_limits": { 00:15:08.607 "rw_ios_per_sec": 0, 00:15:08.607 "rw_mbytes_per_sec": 0, 00:15:08.607 "r_mbytes_per_sec": 0, 00:15:08.607 "w_mbytes_per_sec": 0 00:15:08.607 }, 00:15:08.607 "claimed": true, 00:15:08.607 "claim_type": "exclusive_write", 00:15:08.607 "zoned": false, 00:15:08.607 "supported_io_types": { 00:15:08.607 "read": true, 00:15:08.607 "write": true, 00:15:08.607 "unmap": true, 00:15:08.607 "flush": true, 00:15:08.607 "reset": true, 00:15:08.607 "nvme_admin": false, 00:15:08.607 
"nvme_io": false, 00:15:08.607 "nvme_io_md": false, 00:15:08.607 "write_zeroes": true, 00:15:08.607 "zcopy": true, 00:15:08.607 "get_zone_info": false, 00:15:08.607 "zone_management": false, 00:15:08.607 "zone_append": false, 00:15:08.607 "compare": false, 00:15:08.607 "compare_and_write": false, 00:15:08.607 "abort": true, 00:15:08.607 "seek_hole": false, 00:15:08.607 "seek_data": false, 00:15:08.607 "copy": true, 00:15:08.607 "nvme_iov_md": false 00:15:08.607 }, 00:15:08.607 "memory_domains": [ 00:15:08.607 { 00:15:08.607 "dma_device_id": "system", 00:15:08.607 "dma_device_type": 1 00:15:08.607 }, 00:15:08.607 { 00:15:08.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.607 "dma_device_type": 2 00:15:08.607 } 00:15:08.607 ], 00:15:08.607 "driver_specific": {} 00:15:08.607 } 00:15:08.607 ] 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.607 "name": "Existed_Raid", 00:15:08.607 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:08.607 "strip_size_kb": 64, 00:15:08.607 "state": "configuring", 00:15:08.607 "raid_level": "raid5f", 00:15:08.607 "superblock": true, 00:15:08.607 "num_base_bdevs": 3, 00:15:08.607 "num_base_bdevs_discovered": 2, 00:15:08.607 "num_base_bdevs_operational": 3, 00:15:08.607 "base_bdevs_list": [ 00:15:08.607 { 00:15:08.607 "name": "BaseBdev1", 00:15:08.607 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 00:15:08.607 "is_configured": true, 00:15:08.607 "data_offset": 2048, 00:15:08.607 "data_size": 63488 00:15:08.607 }, 00:15:08.607 { 00:15:08.607 "name": null, 00:15:08.607 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:08.607 "is_configured": false, 00:15:08.607 "data_offset": 0, 00:15:08.607 "data_size": 63488 00:15:08.607 }, 00:15:08.607 { 00:15:08.607 "name": "BaseBdev3", 00:15:08.607 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:08.607 "is_configured": true, 00:15:08.607 "data_offset": 2048, 00:15:08.607 "data_size": 
63488 00:15:08.607 } 00:15:08.607 ] 00:15:08.607 }' 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.607 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.867 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.867 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:08.867 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.867 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.127 [2024-12-14 12:41:08.647470] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.127 12:41:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.127 "name": "Existed_Raid", 00:15:09.127 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:09.127 "strip_size_kb": 64, 00:15:09.127 "state": "configuring", 00:15:09.127 "raid_level": "raid5f", 00:15:09.127 "superblock": true, 00:15:09.127 "num_base_bdevs": 3, 00:15:09.127 "num_base_bdevs_discovered": 1, 00:15:09.127 "num_base_bdevs_operational": 3, 00:15:09.127 "base_bdevs_list": [ 00:15:09.127 { 00:15:09.127 "name": "BaseBdev1", 00:15:09.127 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 
00:15:09.127 "is_configured": true, 00:15:09.127 "data_offset": 2048, 00:15:09.127 "data_size": 63488 00:15:09.127 }, 00:15:09.127 { 00:15:09.127 "name": null, 00:15:09.127 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:09.127 "is_configured": false, 00:15:09.127 "data_offset": 0, 00:15:09.127 "data_size": 63488 00:15:09.127 }, 00:15:09.127 { 00:15:09.127 "name": null, 00:15:09.127 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:09.127 "is_configured": false, 00:15:09.127 "data_offset": 0, 00:15:09.127 "data_size": 63488 00:15:09.127 } 00:15:09.127 ] 00:15:09.127 }' 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.127 12:41:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.387 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.387 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:09.387 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.387 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.387 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.646 [2024-12-14 12:41:09.146651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is 
claimed 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.646 "name": "Existed_Raid", 00:15:09.646 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:09.646 "strip_size_kb": 64, 00:15:09.646 "state": "configuring", 00:15:09.646 "raid_level": "raid5f", 00:15:09.646 "superblock": true, 00:15:09.646 "num_base_bdevs": 3, 00:15:09.646 "num_base_bdevs_discovered": 2, 00:15:09.646 "num_base_bdevs_operational": 3, 00:15:09.646 "base_bdevs_list": [ 00:15:09.646 { 00:15:09.646 "name": "BaseBdev1", 00:15:09.646 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 00:15:09.646 "is_configured": true, 00:15:09.646 "data_offset": 2048, 00:15:09.646 "data_size": 63488 00:15:09.646 }, 00:15:09.646 { 00:15:09.646 "name": null, 00:15:09.646 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:09.646 "is_configured": false, 00:15:09.646 "data_offset": 0, 00:15:09.646 "data_size": 63488 00:15:09.646 }, 00:15:09.646 { 00:15:09.646 "name": "BaseBdev3", 00:15:09.646 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:09.646 "is_configured": true, 00:15:09.646 "data_offset": 2048, 00:15:09.646 "data_size": 63488 00:15:09.646 } 00:15:09.646 ] 00:15:09.646 }' 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.646 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.906 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.906 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.906 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.906 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:09.906 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.165 12:41:09 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.165 [2024-12-14 12:41:09.657796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.165 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.166 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:10.166 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.166 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.166 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.166 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.166 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.166 "name": "Existed_Raid", 00:15:10.166 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:10.166 "strip_size_kb": 64, 00:15:10.166 "state": "configuring", 00:15:10.166 "raid_level": "raid5f", 00:15:10.166 "superblock": true, 00:15:10.166 "num_base_bdevs": 3, 00:15:10.166 "num_base_bdevs_discovered": 1, 00:15:10.166 "num_base_bdevs_operational": 3, 00:15:10.166 "base_bdevs_list": [ 00:15:10.166 { 00:15:10.166 "name": null, 00:15:10.166 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 00:15:10.166 "is_configured": false, 00:15:10.166 "data_offset": 0, 00:15:10.166 "data_size": 63488 00:15:10.166 }, 00:15:10.166 { 00:15:10.166 "name": null, 00:15:10.166 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:10.166 "is_configured": false, 00:15:10.166 "data_offset": 0, 00:15:10.166 "data_size": 63488 00:15:10.166 }, 00:15:10.166 { 00:15:10.166 "name": "BaseBdev3", 00:15:10.166 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:10.166 "is_configured": true, 00:15:10.166 "data_offset": 2048, 00:15:10.166 "data_size": 63488 00:15:10.166 } 00:15:10.166 ] 00:15:10.166 }' 00:15:10.166 12:41:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.166 12:41:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.425 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:15:10.425 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.425 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.685 [2024-12-14 12:41:10.190314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.685 
12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.685 "name": "Existed_Raid", 00:15:10.685 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:10.685 "strip_size_kb": 64, 00:15:10.685 "state": "configuring", 00:15:10.685 "raid_level": "raid5f", 00:15:10.685 "superblock": true, 00:15:10.685 "num_base_bdevs": 3, 00:15:10.685 "num_base_bdevs_discovered": 2, 00:15:10.685 "num_base_bdevs_operational": 3, 00:15:10.685 "base_bdevs_list": [ 00:15:10.685 { 00:15:10.685 "name": null, 00:15:10.685 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 00:15:10.685 "is_configured": false, 00:15:10.685 "data_offset": 0, 00:15:10.685 "data_size": 63488 00:15:10.685 }, 00:15:10.685 { 00:15:10.685 "name": "BaseBdev2", 00:15:10.685 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:10.685 "is_configured": true, 00:15:10.685 "data_offset": 2048, 00:15:10.685 "data_size": 63488 00:15:10.685 }, 
00:15:10.685 { 00:15:10.685 "name": "BaseBdev3", 00:15:10.685 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:10.685 "is_configured": true, 00:15:10.685 "data_offset": 2048, 00:15:10.685 "data_size": 63488 00:15:10.685 } 00:15:10.685 ] 00:15:10.685 }' 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.685 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5 00:15:11.254 12:41:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 [2024-12-14 12:41:10.801425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:11.254 [2024-12-14 12:41:10.801659] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:11.254 [2024-12-14 12:41:10.801676] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:11.254 [2024-12-14 12:41:10.801921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:11.254 NewBaseBdev 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 [2024-12-14 12:41:10.807142] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev 
generic 0x617000008200 00:15:11.254 [2024-12-14 12:41:10.807165] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:11.254 [2024-12-14 12:41:10.807325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.254 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.254 [ 00:15:11.254 { 00:15:11.254 "name": "NewBaseBdev", 00:15:11.254 "aliases": [ 00:15:11.254 "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5" 00:15:11.254 ], 00:15:11.254 "product_name": "Malloc disk", 00:15:11.254 "block_size": 512, 00:15:11.254 "num_blocks": 65536, 00:15:11.254 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 00:15:11.254 "assigned_rate_limits": { 00:15:11.254 "rw_ios_per_sec": 0, 00:15:11.254 "rw_mbytes_per_sec": 0, 00:15:11.254 "r_mbytes_per_sec": 0, 00:15:11.254 "w_mbytes_per_sec": 0 00:15:11.254 }, 00:15:11.254 "claimed": true, 00:15:11.254 "claim_type": "exclusive_write", 00:15:11.254 "zoned": false, 00:15:11.254 "supported_io_types": { 00:15:11.254 "read": true, 00:15:11.254 "write": true, 00:15:11.254 "unmap": true, 00:15:11.254 "flush": true, 00:15:11.254 "reset": true, 00:15:11.254 "nvme_admin": false, 00:15:11.254 "nvme_io": false, 00:15:11.254 "nvme_io_md": false, 00:15:11.254 "write_zeroes": true, 00:15:11.254 "zcopy": true, 00:15:11.254 "get_zone_info": false, 00:15:11.254 "zone_management": false, 00:15:11.254 "zone_append": false, 00:15:11.254 "compare": false, 00:15:11.254 "compare_and_write": false, 00:15:11.254 "abort": true, 00:15:11.254 "seek_hole": false, 
00:15:11.254 "seek_data": false, 00:15:11.254 "copy": true, 00:15:11.254 "nvme_iov_md": false 00:15:11.254 }, 00:15:11.255 "memory_domains": [ 00:15:11.255 { 00:15:11.255 "dma_device_id": "system", 00:15:11.255 "dma_device_type": 1 00:15:11.255 }, 00:15:11.255 { 00:15:11.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.255 "dma_device_type": 2 00:15:11.255 } 00:15:11.255 ], 00:15:11.255 "driver_specific": {} 00:15:11.255 } 00:15:11.255 ] 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.255 "name": "Existed_Raid", 00:15:11.255 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:11.255 "strip_size_kb": 64, 00:15:11.255 "state": "online", 00:15:11.255 "raid_level": "raid5f", 00:15:11.255 "superblock": true, 00:15:11.255 "num_base_bdevs": 3, 00:15:11.255 "num_base_bdevs_discovered": 3, 00:15:11.255 "num_base_bdevs_operational": 3, 00:15:11.255 "base_bdevs_list": [ 00:15:11.255 { 00:15:11.255 "name": "NewBaseBdev", 00:15:11.255 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 00:15:11.255 "is_configured": true, 00:15:11.255 "data_offset": 2048, 00:15:11.255 "data_size": 63488 00:15:11.255 }, 00:15:11.255 { 00:15:11.255 "name": "BaseBdev2", 00:15:11.255 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:11.255 "is_configured": true, 00:15:11.255 "data_offset": 2048, 00:15:11.255 "data_size": 63488 00:15:11.255 }, 00:15:11.255 { 00:15:11.255 "name": "BaseBdev3", 00:15:11.255 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:11.255 "is_configured": true, 00:15:11.255 "data_offset": 2048, 00:15:11.255 "data_size": 63488 00:15:11.255 } 00:15:11.255 ] 00:15:11.255 }' 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.255 12:41:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # 
verify_raid_bdev_properties Existed_Raid 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.824 [2024-12-14 12:41:11.316701] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:11.824 "name": "Existed_Raid", 00:15:11.824 "aliases": [ 00:15:11.824 "7c326021-b29a-47d7-aaf5-77663ffffb0e" 00:15:11.824 ], 00:15:11.824 "product_name": "Raid Volume", 00:15:11.824 "block_size": 512, 00:15:11.824 "num_blocks": 126976, 00:15:11.824 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:11.824 "assigned_rate_limits": { 00:15:11.824 "rw_ios_per_sec": 0, 00:15:11.824 "rw_mbytes_per_sec": 0, 00:15:11.824 "r_mbytes_per_sec": 0, 00:15:11.824 "w_mbytes_per_sec": 0 00:15:11.824 }, 00:15:11.824 "claimed": false, 00:15:11.824 "zoned": false, 00:15:11.824 
"supported_io_types": { 00:15:11.824 "read": true, 00:15:11.824 "write": true, 00:15:11.824 "unmap": false, 00:15:11.824 "flush": false, 00:15:11.824 "reset": true, 00:15:11.824 "nvme_admin": false, 00:15:11.824 "nvme_io": false, 00:15:11.824 "nvme_io_md": false, 00:15:11.824 "write_zeroes": true, 00:15:11.824 "zcopy": false, 00:15:11.824 "get_zone_info": false, 00:15:11.824 "zone_management": false, 00:15:11.824 "zone_append": false, 00:15:11.824 "compare": false, 00:15:11.824 "compare_and_write": false, 00:15:11.824 "abort": false, 00:15:11.824 "seek_hole": false, 00:15:11.824 "seek_data": false, 00:15:11.824 "copy": false, 00:15:11.824 "nvme_iov_md": false 00:15:11.824 }, 00:15:11.824 "driver_specific": { 00:15:11.824 "raid": { 00:15:11.824 "uuid": "7c326021-b29a-47d7-aaf5-77663ffffb0e", 00:15:11.824 "strip_size_kb": 64, 00:15:11.824 "state": "online", 00:15:11.824 "raid_level": "raid5f", 00:15:11.824 "superblock": true, 00:15:11.824 "num_base_bdevs": 3, 00:15:11.824 "num_base_bdevs_discovered": 3, 00:15:11.824 "num_base_bdevs_operational": 3, 00:15:11.824 "base_bdevs_list": [ 00:15:11.824 { 00:15:11.824 "name": "NewBaseBdev", 00:15:11.824 "uuid": "29b6b6ff-3c15-43b5-927c-3e1c4efbd6e5", 00:15:11.824 "is_configured": true, 00:15:11.824 "data_offset": 2048, 00:15:11.824 "data_size": 63488 00:15:11.824 }, 00:15:11.824 { 00:15:11.824 "name": "BaseBdev2", 00:15:11.824 "uuid": "438e1a7e-c1cb-4859-89f9-23dffe5acc03", 00:15:11.824 "is_configured": true, 00:15:11.824 "data_offset": 2048, 00:15:11.824 "data_size": 63488 00:15:11.824 }, 00:15:11.824 { 00:15:11.824 "name": "BaseBdev3", 00:15:11.824 "uuid": "57312ae6-e003-4692-b969-b95d41fef9d7", 00:15:11.824 "is_configured": true, 00:15:11.824 "data_offset": 2048, 00:15:11.824 "data_size": 63488 00:15:11.824 } 00:15:11.824 ] 00:15:11.824 } 00:15:11.824 } 00:15:11.824 }' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:11.824 BaseBdev2 00:15:11.824 BaseBdev3' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.824 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.085 [2024-12-14 12:41:11.611940] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:12.085 [2024-12-14 12:41:11.611972] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev 
state changing from online to offline 00:15:12.085 [2024-12-14 12:41:11.612058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:12.085 [2024-12-14 12:41:11.612349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:12.085 [2024-12-14 12:41:11.612371] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 82264 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82264 ']' 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 82264 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82264 00:15:12.085 killing process with pid 82264 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82264' 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 82264 00:15:12.085 [2024-12-14 12:41:11.660827] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:12.085 12:41:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@978 -- # wait 82264 00:15:12.345 [2024-12-14 12:41:11.946084] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.285 12:41:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:13.285 00:15:13.285 real 0m10.326s 00:15:13.285 user 0m16.486s 00:15:13.285 sys 0m1.825s 00:15:13.285 12:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.285 12:41:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.285 ************************************ 00:15:13.285 END TEST raid5f_state_function_test_sb 00:15:13.285 ************************************ 00:15:13.545 12:41:13 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:15:13.545 12:41:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:13.545 12:41:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.545 12:41:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.545 ************************************ 00:15:13.545 START TEST raid5f_superblock_test 00:15:13.545 ************************************ 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 
00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=82879 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 82879 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 82879 ']' 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.545 12:41:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.545 [2024-12-14 12:41:13.177599] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:13.545 [2024-12-14 12:41:13.177719] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82879 ] 00:15:13.805 [2024-12-14 12:41:13.348666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.805 [2024-12-14 12:41:13.457637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.065 [2024-12-14 12:41:13.648187] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.065 [2024-12-14 12:41:13.648245] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.325 malloc1 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.325 [2024-12-14 12:41:14.052665] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:14.325 [2024-12-14 12:41:14.052722] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.325 [2024-12-14 12:41:14.052758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:14.325 [2024-12-14 12:41:14.052767] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.325 [2024-12-14 12:41:14.054853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.325 [2024-12-14 12:41:14.054888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:14.325 pt1 00:15:14.325 
12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.325 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.585 malloc2 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.585 [2024-12-14 12:41:14.103899] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:14.585 [2024-12-14 
12:41:14.103950] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.585 [2024-12-14 12:41:14.103986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:14.585 [2024-12-14 12:41:14.103995] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.585 [2024-12-14 12:41:14.106001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.585 [2024-12-14 12:41:14.106035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:14.585 pt2 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.585 malloc3 00:15:14.585 12:41:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.585 [2024-12-14 12:41:14.167098] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:14.585 [2024-12-14 12:41:14.167143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.585 [2024-12-14 12:41:14.167164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:14.585 [2024-12-14 12:41:14.167172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.585 [2024-12-14 12:41:14.169225] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.585 [2024-12-14 12:41:14.169256] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:14.585 pt3 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:14.585 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.586 [2024-12-14 12:41:14.179125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 
is claimed 00:15:14.586 [2024-12-14 12:41:14.180869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:14.586 [2024-12-14 12:41:14.180935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:14.586 [2024-12-14 12:41:14.181101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:14.586 [2024-12-14 12:41:14.181121] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:14.586 [2024-12-14 12:41:14.181355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:14.586 [2024-12-14 12:41:14.186793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:14.586 [2024-12-14 12:41:14.186814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:14.586 [2024-12-14 12:41:14.186989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.586 "name": "raid_bdev1", 00:15:14.586 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:14.586 "strip_size_kb": 64, 00:15:14.586 "state": "online", 00:15:14.586 "raid_level": "raid5f", 00:15:14.586 "superblock": true, 00:15:14.586 "num_base_bdevs": 3, 00:15:14.586 "num_base_bdevs_discovered": 3, 00:15:14.586 "num_base_bdevs_operational": 3, 00:15:14.586 "base_bdevs_list": [ 00:15:14.586 { 00:15:14.586 "name": "pt1", 00:15:14.586 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:14.586 "is_configured": true, 00:15:14.586 "data_offset": 2048, 00:15:14.586 "data_size": 63488 00:15:14.586 }, 00:15:14.586 { 00:15:14.586 "name": "pt2", 00:15:14.586 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.586 "is_configured": true, 00:15:14.586 "data_offset": 2048, 00:15:14.586 "data_size": 63488 00:15:14.586 }, 00:15:14.586 { 00:15:14.586 "name": "pt3", 00:15:14.586 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.586 "is_configured": true, 00:15:14.586 "data_offset": 2048, 00:15:14.586 "data_size": 63488 00:15:14.586 } 00:15:14.586 ] 
00:15:14.586 }' 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.586 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.155 [2024-12-14 12:41:14.636589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.155 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:15.155 "name": "raid_bdev1", 00:15:15.155 "aliases": [ 00:15:15.155 "87def66e-07fb-468a-9d40-6ce81c8360c8" 00:15:15.155 ], 00:15:15.155 "product_name": "Raid Volume", 00:15:15.155 "block_size": 512, 00:15:15.155 "num_blocks": 126976, 00:15:15.155 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:15.155 "assigned_rate_limits": { 00:15:15.155 
"rw_ios_per_sec": 0, 00:15:15.155 "rw_mbytes_per_sec": 0, 00:15:15.155 "r_mbytes_per_sec": 0, 00:15:15.155 "w_mbytes_per_sec": 0 00:15:15.155 }, 00:15:15.155 "claimed": false, 00:15:15.155 "zoned": false, 00:15:15.155 "supported_io_types": { 00:15:15.155 "read": true, 00:15:15.155 "write": true, 00:15:15.155 "unmap": false, 00:15:15.155 "flush": false, 00:15:15.155 "reset": true, 00:15:15.155 "nvme_admin": false, 00:15:15.155 "nvme_io": false, 00:15:15.155 "nvme_io_md": false, 00:15:15.155 "write_zeroes": true, 00:15:15.155 "zcopy": false, 00:15:15.155 "get_zone_info": false, 00:15:15.155 "zone_management": false, 00:15:15.155 "zone_append": false, 00:15:15.155 "compare": false, 00:15:15.155 "compare_and_write": false, 00:15:15.155 "abort": false, 00:15:15.155 "seek_hole": false, 00:15:15.155 "seek_data": false, 00:15:15.155 "copy": false, 00:15:15.155 "nvme_iov_md": false 00:15:15.155 }, 00:15:15.155 "driver_specific": { 00:15:15.155 "raid": { 00:15:15.155 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:15.155 "strip_size_kb": 64, 00:15:15.155 "state": "online", 00:15:15.155 "raid_level": "raid5f", 00:15:15.155 "superblock": true, 00:15:15.155 "num_base_bdevs": 3, 00:15:15.155 "num_base_bdevs_discovered": 3, 00:15:15.156 "num_base_bdevs_operational": 3, 00:15:15.156 "base_bdevs_list": [ 00:15:15.156 { 00:15:15.156 "name": "pt1", 00:15:15.156 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.156 "is_configured": true, 00:15:15.156 "data_offset": 2048, 00:15:15.156 "data_size": 63488 00:15:15.156 }, 00:15:15.156 { 00:15:15.156 "name": "pt2", 00:15:15.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.156 "is_configured": true, 00:15:15.156 "data_offset": 2048, 00:15:15.156 "data_size": 63488 00:15:15.156 }, 00:15:15.156 { 00:15:15.156 "name": "pt3", 00:15:15.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.156 "is_configured": true, 00:15:15.156 "data_offset": 2048, 00:15:15.156 "data_size": 63488 00:15:15.156 } 00:15:15.156 ] 
00:15:15.156 } 00:15:15.156 } 00:15:15.156 }' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:15.156 pt2 00:15:15.156 pt3' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.156 12:41:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:15.156 [2024-12-14 12:41:14.844167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=87def66e-07fb-468a-9d40-6ce81c8360c8 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 87def66e-07fb-468a-9d40-6ce81c8360c8 ']' 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.156 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 [2024-12-14 12:41:14.891899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.416 [2024-12-14 12:41:14.891928] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:15.416 [2024-12-14 12:41:14.892000] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.416 [2024-12-14 12:41:14.892088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.416 [2024-12-14 12:41:14.892098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 12:41:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.416 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:15.417 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:15.417 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.417 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.417 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.417 12:41:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:15.417 12:41:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:15.417 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.417 12:41:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.417 [2024-12-14 12:41:15.043719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:15.417 [2024-12-14 
12:41:15.045529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:15.417 [2024-12-14 12:41:15.045587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:15.417 [2024-12-14 12:41:15.045636] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:15.417 [2024-12-14 12:41:15.045680] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:15.417 [2024-12-14 12:41:15.045699] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:15.417 [2024-12-14 12:41:15.045715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.417 [2024-12-14 12:41:15.045724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:15.417 request: 00:15:15.417 { 00:15:15.417 "name": "raid_bdev1", 00:15:15.417 "raid_level": "raid5f", 00:15:15.417 "base_bdevs": [ 00:15:15.417 "malloc1", 00:15:15.417 "malloc2", 00:15:15.417 "malloc3" 00:15:15.417 ], 00:15:15.417 "strip_size_kb": 64, 00:15:15.417 "superblock": false, 00:15:15.417 "method": "bdev_raid_create", 00:15:15.417 "req_id": 1 00:15:15.417 } 00:15:15.417 Got JSON-RPC error response 00:15:15.417 response: 00:15:15.417 { 00:15:15.417 "code": -17, 00:15:15.417 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:15.417 } 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.417 [2024-12-14 12:41:15.107520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:15.417 [2024-12-14 12:41:15.107566] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.417 [2024-12-14 12:41:15.107583] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:15.417 [2024-12-14 12:41:15.107607] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.417 [2024-12-14 12:41:15.109759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.417 [2024-12-14 12:41:15.109793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:15.417 [2024-12-14 12:41:15.109863] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:15.417 [2024-12-14 12:41:15.109922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:15.417 pt1 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.417 "name": "raid_bdev1", 00:15:15.417 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:15.417 "strip_size_kb": 64, 00:15:15.417 "state": "configuring", 00:15:15.417 "raid_level": "raid5f", 00:15:15.417 "superblock": true, 00:15:15.417 "num_base_bdevs": 3, 00:15:15.417 "num_base_bdevs_discovered": 1, 00:15:15.417 "num_base_bdevs_operational": 3, 00:15:15.417 "base_bdevs_list": [ 00:15:15.417 { 00:15:15.417 "name": "pt1", 00:15:15.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.417 "is_configured": true, 00:15:15.417 "data_offset": 2048, 00:15:15.417 "data_size": 63488 00:15:15.417 }, 00:15:15.417 { 00:15:15.417 "name": null, 00:15:15.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.417 "is_configured": false, 00:15:15.417 "data_offset": 2048, 00:15:15.417 "data_size": 63488 00:15:15.417 }, 00:15:15.417 { 00:15:15.417 "name": null, 00:15:15.417 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.417 "is_configured": false, 00:15:15.417 "data_offset": 2048, 00:15:15.417 "data_size": 63488 00:15:15.417 } 00:15:15.417 ] 00:15:15.417 }' 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.417 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.985 [2024-12-14 12:41:15.530853] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:15.985 [2024-12-14 12:41:15.530918] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.985 [2024-12-14 12:41:15.530941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:15.985 [2024-12-14 12:41:15.530950] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.985 [2024-12-14 12:41:15.531437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.985 [2024-12-14 12:41:15.531472] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:15.985 [2024-12-14 12:41:15.531566] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:15.985 [2024-12-14 12:41:15.531600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:15.985 pt2 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.985 [2024-12-14 12:41:15.542821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:15.985 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.986 "name": "raid_bdev1", 00:15:15.986 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:15.986 "strip_size_kb": 64, 00:15:15.986 "state": "configuring", 00:15:15.986 "raid_level": "raid5f", 00:15:15.986 "superblock": true, 00:15:15.986 "num_base_bdevs": 3, 00:15:15.986 "num_base_bdevs_discovered": 1, 00:15:15.986 "num_base_bdevs_operational": 3, 00:15:15.986 "base_bdevs_list": [ 00:15:15.986 { 00:15:15.986 "name": "pt1", 00:15:15.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:15.986 "is_configured": true, 00:15:15.986 "data_offset": 2048, 00:15:15.986 "data_size": 63488 00:15:15.986 }, 00:15:15.986 { 
00:15:15.986 "name": null, 00:15:15.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:15.986 "is_configured": false, 00:15:15.986 "data_offset": 0, 00:15:15.986 "data_size": 63488 00:15:15.986 }, 00:15:15.986 { 00:15:15.986 "name": null, 00:15:15.986 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:15.986 "is_configured": false, 00:15:15.986 "data_offset": 2048, 00:15:15.986 "data_size": 63488 00:15:15.986 } 00:15:15.986 ] 00:15:15.986 }' 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.986 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.245 [2024-12-14 12:41:15.914185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:16.245 [2024-12-14 12:41:15.914253] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.245 [2024-12-14 12:41:15.914271] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:16.245 [2024-12-14 12:41:15.914281] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.245 [2024-12-14 12:41:15.914760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.245 [2024-12-14 12:41:15.914791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:16.245 [2024-12-14 
12:41:15.914876] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:16.245 [2024-12-14 12:41:15.914901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:16.245 pt2 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:16.245 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.246 [2024-12-14 12:41:15.926152] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:16.246 [2024-12-14 12:41:15.926199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.246 [2024-12-14 12:41:15.926213] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:16.246 [2024-12-14 12:41:15.926222] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.246 [2024-12-14 12:41:15.926606] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.246 [2024-12-14 12:41:15.926635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:16.246 [2024-12-14 12:41:15.926700] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:16.246 [2024-12-14 12:41:15.926720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:16.246 [2024-12-14 12:41:15.926840] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:15:16.246 [2024-12-14 12:41:15.926860] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:16.246 [2024-12-14 12:41:15.927110] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:16.246 [2024-12-14 12:41:15.932279] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:16.246 [2024-12-14 12:41:15.932301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:16.246 [2024-12-14 12:41:15.932477] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.246 pt3 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.246 12:41:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.506 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.506 "name": "raid_bdev1", 00:15:16.506 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:16.506 "strip_size_kb": 64, 00:15:16.506 "state": "online", 00:15:16.506 "raid_level": "raid5f", 00:15:16.506 "superblock": true, 00:15:16.506 "num_base_bdevs": 3, 00:15:16.506 "num_base_bdevs_discovered": 3, 00:15:16.506 "num_base_bdevs_operational": 3, 00:15:16.506 "base_bdevs_list": [ 00:15:16.506 { 00:15:16.506 "name": "pt1", 00:15:16.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.506 "is_configured": true, 00:15:16.506 "data_offset": 2048, 00:15:16.506 "data_size": 63488 00:15:16.506 }, 00:15:16.506 { 00:15:16.506 "name": "pt2", 00:15:16.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.506 "is_configured": true, 00:15:16.506 "data_offset": 2048, 00:15:16.506 "data_size": 63488 00:15:16.506 }, 00:15:16.506 { 00:15:16.506 "name": "pt3", 00:15:16.506 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.506 "is_configured": true, 00:15:16.506 "data_offset": 2048, 00:15:16.506 "data_size": 63488 00:15:16.506 } 00:15:16.506 ] 00:15:16.506 }' 00:15:16.506 12:41:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.506 12:41:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.765 [2024-12-14 12:41:16.350669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.765 "name": "raid_bdev1", 00:15:16.765 "aliases": [ 00:15:16.765 "87def66e-07fb-468a-9d40-6ce81c8360c8" 00:15:16.765 ], 00:15:16.765 "product_name": "Raid Volume", 00:15:16.765 "block_size": 512, 00:15:16.765 "num_blocks": 126976, 00:15:16.765 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:16.765 "assigned_rate_limits": { 00:15:16.765 "rw_ios_per_sec": 0, 00:15:16.765 "rw_mbytes_per_sec": 0, 00:15:16.765 "r_mbytes_per_sec": 0, 00:15:16.765 "w_mbytes_per_sec": 0 00:15:16.765 }, 
00:15:16.765 "claimed": false, 00:15:16.765 "zoned": false, 00:15:16.765 "supported_io_types": { 00:15:16.765 "read": true, 00:15:16.765 "write": true, 00:15:16.765 "unmap": false, 00:15:16.765 "flush": false, 00:15:16.765 "reset": true, 00:15:16.765 "nvme_admin": false, 00:15:16.765 "nvme_io": false, 00:15:16.765 "nvme_io_md": false, 00:15:16.765 "write_zeroes": true, 00:15:16.765 "zcopy": false, 00:15:16.765 "get_zone_info": false, 00:15:16.765 "zone_management": false, 00:15:16.765 "zone_append": false, 00:15:16.765 "compare": false, 00:15:16.765 "compare_and_write": false, 00:15:16.765 "abort": false, 00:15:16.765 "seek_hole": false, 00:15:16.765 "seek_data": false, 00:15:16.765 "copy": false, 00:15:16.765 "nvme_iov_md": false 00:15:16.765 }, 00:15:16.765 "driver_specific": { 00:15:16.765 "raid": { 00:15:16.765 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:16.765 "strip_size_kb": 64, 00:15:16.765 "state": "online", 00:15:16.765 "raid_level": "raid5f", 00:15:16.765 "superblock": true, 00:15:16.765 "num_base_bdevs": 3, 00:15:16.765 "num_base_bdevs_discovered": 3, 00:15:16.765 "num_base_bdevs_operational": 3, 00:15:16.765 "base_bdevs_list": [ 00:15:16.765 { 00:15:16.765 "name": "pt1", 00:15:16.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.765 "is_configured": true, 00:15:16.765 "data_offset": 2048, 00:15:16.765 "data_size": 63488 00:15:16.765 }, 00:15:16.765 { 00:15:16.765 "name": "pt2", 00:15:16.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.765 "is_configured": true, 00:15:16.765 "data_offset": 2048, 00:15:16.765 "data_size": 63488 00:15:16.765 }, 00:15:16.765 { 00:15:16.765 "name": "pt3", 00:15:16.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.765 "is_configured": true, 00:15:16.765 "data_offset": 2048, 00:15:16.765 "data_size": 63488 00:15:16.765 } 00:15:16.765 ] 00:15:16.765 } 00:15:16.765 } 00:15:16.765 }' 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:16.765 pt2 00:15:16.765 pt3' 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:16.765 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:16.766 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.766 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.025 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.026 [2024-12-14 12:41:16.650086] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 
87def66e-07fb-468a-9d40-6ce81c8360c8 '!=' 87def66e-07fb-468a-9d40-6ce81c8360c8 ']' 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.026 [2024-12-14 12:41:16.677900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.026 "name": "raid_bdev1", 00:15:17.026 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:17.026 "strip_size_kb": 64, 00:15:17.026 "state": "online", 00:15:17.026 "raid_level": "raid5f", 00:15:17.026 "superblock": true, 00:15:17.026 "num_base_bdevs": 3, 00:15:17.026 "num_base_bdevs_discovered": 2, 00:15:17.026 "num_base_bdevs_operational": 2, 00:15:17.026 "base_bdevs_list": [ 00:15:17.026 { 00:15:17.026 "name": null, 00:15:17.026 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.026 "is_configured": false, 00:15:17.026 "data_offset": 0, 00:15:17.026 "data_size": 63488 00:15:17.026 }, 00:15:17.026 { 00:15:17.026 "name": "pt2", 00:15:17.026 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.026 "is_configured": true, 00:15:17.026 "data_offset": 2048, 00:15:17.026 "data_size": 63488 00:15:17.026 }, 00:15:17.026 { 00:15:17.026 "name": "pt3", 00:15:17.026 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.026 "is_configured": true, 00:15:17.026 "data_offset": 2048, 00:15:17.026 "data_size": 63488 00:15:17.026 } 00:15:17.026 ] 00:15:17.026 }' 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.026 12:41:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.596 
12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.596 [2024-12-14 12:41:17.157021] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.596 [2024-12-14 12:41:17.157063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.596 [2024-12-14 12:41:17.157141] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.596 [2024-12-14 12:41:17.157197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.596 [2024-12-14 12:41:17.157210] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # 
(( i < num_base_bdevs )) 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.596 [2024-12-14 12:41:17.240842] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
malloc2 00:15:17.596 [2024-12-14 12:41:17.240894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.596 [2024-12-14 12:41:17.240926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:17.596 [2024-12-14 12:41:17.240936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.596 [2024-12-14 12:41:17.243136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.596 [2024-12-14 12:41:17.243176] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:17.596 [2024-12-14 12:41:17.243250] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:17.596 [2024-12-14 12:41:17.243295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:17.596 pt2 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.596 "name": "raid_bdev1", 00:15:17.596 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:17.596 "strip_size_kb": 64, 00:15:17.596 "state": "configuring", 00:15:17.596 "raid_level": "raid5f", 00:15:17.596 "superblock": true, 00:15:17.596 "num_base_bdevs": 3, 00:15:17.596 "num_base_bdevs_discovered": 1, 00:15:17.596 "num_base_bdevs_operational": 2, 00:15:17.596 "base_bdevs_list": [ 00:15:17.596 { 00:15:17.596 "name": null, 00:15:17.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.596 "is_configured": false, 00:15:17.596 "data_offset": 2048, 00:15:17.596 "data_size": 63488 00:15:17.596 }, 00:15:17.596 { 00:15:17.596 "name": "pt2", 00:15:17.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.596 "is_configured": true, 00:15:17.596 "data_offset": 2048, 00:15:17.596 "data_size": 63488 00:15:17.596 }, 00:15:17.596 { 00:15:17.596 "name": null, 00:15:17.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.596 "is_configured": false, 00:15:17.596 "data_offset": 2048, 00:15:17.596 "data_size": 63488 00:15:17.596 } 00:15:17.596 ] 00:15:17.596 }' 00:15:17.596 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.596 12:41:17 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.166 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:18.166 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:18.166 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:15:18.166 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:18.166 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.166 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.166 [2024-12-14 12:41:17.696110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:18.166 [2024-12-14 12:41:17.696180] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.166 [2024-12-14 12:41:17.696224] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:18.166 [2024-12-14 12:41:17.696238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.166 [2024-12-14 12:41:17.696720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.166 [2024-12-14 12:41:17.696750] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:18.166 [2024-12-14 12:41:17.696835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:18.166 [2024-12-14 12:41:17.696872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:18.166 [2024-12-14 12:41:17.696999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:18.166 [2024-12-14 12:41:17.697016] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:18.166 [2024-12-14 
12:41:17.697278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:18.167 [2024-12-14 12:41:17.702886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:18.167 [2024-12-14 12:41:17.702910] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:18.167 [2024-12-14 12:41:17.703272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.167 pt3 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.167 "name": "raid_bdev1", 00:15:18.167 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:18.167 "strip_size_kb": 64, 00:15:18.167 "state": "online", 00:15:18.167 "raid_level": "raid5f", 00:15:18.167 "superblock": true, 00:15:18.167 "num_base_bdevs": 3, 00:15:18.167 "num_base_bdevs_discovered": 2, 00:15:18.167 "num_base_bdevs_operational": 2, 00:15:18.167 "base_bdevs_list": [ 00:15:18.167 { 00:15:18.167 "name": null, 00:15:18.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.167 "is_configured": false, 00:15:18.167 "data_offset": 2048, 00:15:18.167 "data_size": 63488 00:15:18.167 }, 00:15:18.167 { 00:15:18.167 "name": "pt2", 00:15:18.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.167 "is_configured": true, 00:15:18.167 "data_offset": 2048, 00:15:18.167 "data_size": 63488 00:15:18.167 }, 00:15:18.167 { 00:15:18.167 "name": "pt3", 00:15:18.167 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.167 "is_configured": true, 00:15:18.167 "data_offset": 2048, 00:15:18.167 "data_size": 63488 00:15:18.167 } 00:15:18.167 ] 00:15:18.167 }' 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.167 12:41:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:18.427 [2024-12-14 12:41:18.122411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.427 [2024-12-14 12:41:18.122453] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.427 [2024-12-14 12:41:18.122536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.427 [2024-12-14 12:41:18.122606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:18.427 [2024-12-14 12:41:18.122622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:18.427 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.427 12:41:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.687 [2024-12-14 12:41:18.178316] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:18.687 [2024-12-14 12:41:18.178372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.687 [2024-12-14 12:41:18.178391] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:18.687 [2024-12-14 12:41:18.178400] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.687 [2024-12-14 12:41:18.180804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.687 [2024-12-14 12:41:18.180840] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:18.687 [2024-12-14 12:41:18.180914] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:18.687 [2024-12-14 12:41:18.180958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:18.687 [2024-12-14 12:41:18.181121] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:18.687 [2024-12-14 12:41:18.181157] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:18.687 [2024-12-14 12:41:18.181175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:18.687 
[2024-12-14 12:41:18.181239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.687 pt1 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.687 "name": "raid_bdev1", 00:15:18.687 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:18.687 "strip_size_kb": 64, 00:15:18.687 "state": "configuring", 00:15:18.687 "raid_level": "raid5f", 00:15:18.687 "superblock": true, 00:15:18.687 "num_base_bdevs": 3, 00:15:18.687 "num_base_bdevs_discovered": 1, 00:15:18.687 "num_base_bdevs_operational": 2, 00:15:18.687 "base_bdevs_list": [ 00:15:18.687 { 00:15:18.687 "name": null, 00:15:18.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.687 "is_configured": false, 00:15:18.687 "data_offset": 2048, 00:15:18.687 "data_size": 63488 00:15:18.687 }, 00:15:18.687 { 00:15:18.687 "name": "pt2", 00:15:18.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.687 "is_configured": true, 00:15:18.687 "data_offset": 2048, 00:15:18.687 "data_size": 63488 00:15:18.687 }, 00:15:18.687 { 00:15:18.687 "name": null, 00:15:18.687 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.687 "is_configured": false, 00:15:18.687 "data_offset": 2048, 00:15:18.687 "data_size": 63488 00:15:18.687 } 00:15:18.687 ] 00:15:18.687 }' 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.687 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.947 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.948 [2024-12-14 12:41:18.641532] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:18.948 [2024-12-14 12:41:18.641593] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.948 [2024-12-14 12:41:18.641614] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:18.948 [2024-12-14 12:41:18.641623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.948 [2024-12-14 12:41:18.642153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.948 [2024-12-14 12:41:18.642181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:18.948 [2024-12-14 12:41:18.642266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:18.948 [2024-12-14 12:41:18.642297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:18.948 [2024-12-14 12:41:18.642424] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:18.948 [2024-12-14 12:41:18.642441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:18.948 [2024-12-14 12:41:18.642718] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:18.948 [2024-12-14 12:41:18.648496] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:18.948 [2024-12-14 
12:41:18.648526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:18.948 [2024-12-14 12:41:18.648772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.948 pt3 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.948 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.948 12:41:18 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.208 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.208 "name": "raid_bdev1", 00:15:19.208 "uuid": "87def66e-07fb-468a-9d40-6ce81c8360c8", 00:15:19.208 "strip_size_kb": 64, 00:15:19.208 "state": "online", 00:15:19.208 "raid_level": "raid5f", 00:15:19.208 "superblock": true, 00:15:19.208 "num_base_bdevs": 3, 00:15:19.208 "num_base_bdevs_discovered": 2, 00:15:19.208 "num_base_bdevs_operational": 2, 00:15:19.208 "base_bdevs_list": [ 00:15:19.208 { 00:15:19.208 "name": null, 00:15:19.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.208 "is_configured": false, 00:15:19.208 "data_offset": 2048, 00:15:19.208 "data_size": 63488 00:15:19.208 }, 00:15:19.208 { 00:15:19.208 "name": "pt2", 00:15:19.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.208 "is_configured": true, 00:15:19.208 "data_offset": 2048, 00:15:19.208 "data_size": 63488 00:15:19.208 }, 00:15:19.208 { 00:15:19.208 "name": "pt3", 00:15:19.208 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.208 "is_configured": true, 00:15:19.208 "data_offset": 2048, 00:15:19.208 "data_size": 63488 00:15:19.208 } 00:15:19.208 ] 00:15:19.208 }' 00:15:19.208 12:41:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.208 12:41:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:19.468 [2024-12-14 12:41:19.159131] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 87def66e-07fb-468a-9d40-6ce81c8360c8 '!=' 87def66e-07fb-468a-9d40-6ce81c8360c8 ']' 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 82879 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 82879 ']' 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 82879 00:15:19.468 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:19.727 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:19.727 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82879 00:15:19.727 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:19.727 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:19.727 killing process with pid 82879 00:15:19.727 12:41:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 82879' 00:15:19.727 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 82879 00:15:19.727 [2024-12-14 12:41:19.243258] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:19.727 [2024-12-14 12:41:19.243369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.727 12:41:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 82879 00:15:19.727 [2024-12-14 12:41:19.243444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.727 [2024-12-14 12:41:19.243458] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:19.987 [2024-12-14 12:41:19.533450] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:20.926 12:41:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:20.926 00:15:20.926 real 0m7.519s 00:15:20.926 user 0m11.754s 00:15:20.926 sys 0m1.334s 00:15:20.926 12:41:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:20.926 12:41:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.926 ************************************ 00:15:20.926 END TEST raid5f_superblock_test 00:15:20.926 ************************************ 00:15:21.186 12:41:20 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:21.186 12:41:20 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:21.186 12:41:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:21.186 12:41:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.186 12:41:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:21.186 ************************************ 00:15:21.186 START TEST 
raid5f_rebuild_test 00:15:21.186 ************************************ 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:21.186 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:21.187 12:41:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=83323 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 83323 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 83323 ']' 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:21.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:21.187 12:41:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:21.187 Zero copy mechanism will not be used. 00:15:21.187 [2024-12-14 12:41:20.774084] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:21.187 [2024-12-14 12:41:20.774210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83323 ] 00:15:21.447 [2024-12-14 12:41:20.943791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.447 [2024-12-14 12:41:21.052880] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.706 [2024-12-14 12:41:21.238740] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.706 [2024-12-14 12:41:21.238791] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:21.966 BaseBdev1_malloc 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.966 [2024-12-14 12:41:21.638069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:21.966 [2024-12-14 12:41:21.638121] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.966 [2024-12-14 12:41:21.638141] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:21.966 [2024-12-14 12:41:21.638152] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.966 [2024-12-14 12:41:21.640200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.966 [2024-12-14 12:41:21.640238] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:21.966 BaseBdev1 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.966 BaseBdev2_malloc 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.966 12:41:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.966 [2024-12-14 12:41:21.689820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:21.966 [2024-12-14 12:41:21.689880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.966 [2024-12-14 12:41:21.689913] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:21.966 [2024-12-14 12:41:21.689923] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.966 [2024-12-14 12:41:21.691975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.966 [2024-12-14 12:41:21.692022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:21.966 BaseBdev2 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.966 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.226 BaseBdev3_malloc 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.226 [2024-12-14 12:41:21.757159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:22.226 [2024-12-14 12:41:21.757224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.226 [2024-12-14 12:41:21.757245] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:22.226 [2024-12-14 12:41:21.757254] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.226 [2024-12-14 12:41:21.759240] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.226 [2024-12-14 12:41:21.759275] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:22.226 BaseBdev3 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.226 spare_malloc 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.226 spare_delay 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.226 [2024-12-14 12:41:21.822079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:22.226 [2024-12-14 12:41:21.822144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:22.226 [2024-12-14 12:41:21.822161] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:22.226 [2024-12-14 12:41:21.822171] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:22.226 [2024-12-14 12:41:21.824211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:22.226 [2024-12-14 12:41:21.824250] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:22.226 spare 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.226 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.227 [2024-12-14 12:41:21.834123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.227 [2024-12-14 12:41:21.835884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:22.227 [2024-12-14 12:41:21.835974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:22.227 [2024-12-14 12:41:21.836052] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007780 00:15:22.227 [2024-12-14 12:41:21.836062] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:22.227 [2024-12-14 12:41:21.836319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:22.227 [2024-12-14 12:41:21.841837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:22.227 [2024-12-14 12:41:21.841860] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:22.227 [2024-12-14 12:41:21.842041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.227 
12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.227 "name": "raid_bdev1", 00:15:22.227 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:22.227 "strip_size_kb": 64, 00:15:22.227 "state": "online", 00:15:22.227 "raid_level": "raid5f", 00:15:22.227 "superblock": false, 00:15:22.227 "num_base_bdevs": 3, 00:15:22.227 "num_base_bdevs_discovered": 3, 00:15:22.227 "num_base_bdevs_operational": 3, 00:15:22.227 "base_bdevs_list": [ 00:15:22.227 { 00:15:22.227 "name": "BaseBdev1", 00:15:22.227 "uuid": "ed4ecb9f-91d7-59d0-a48c-f3b15a468665", 00:15:22.227 "is_configured": true, 00:15:22.227 "data_offset": 0, 00:15:22.227 "data_size": 65536 00:15:22.227 }, 00:15:22.227 { 00:15:22.227 "name": "BaseBdev2", 00:15:22.227 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:22.227 "is_configured": true, 00:15:22.227 "data_offset": 0, 00:15:22.227 "data_size": 65536 00:15:22.227 }, 00:15:22.227 { 00:15:22.227 "name": "BaseBdev3", 00:15:22.227 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:22.227 "is_configured": true, 00:15:22.227 "data_offset": 0, 00:15:22.227 "data_size": 65536 00:15:22.227 } 00:15:22.227 ] 00:15:22.227 }' 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.227 12:41:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.796 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:22.796 12:41:22 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.796 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:22.796 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.796 [2024-12-14 12:41:22.244127] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.796 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:22.797 
12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:22.797 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:22.797 [2024-12-14 12:41:22.519482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:23.057 /dev/nbd0 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:15:23.057 1+0 records in 00:15:23.057 1+0 records out 00:15:23.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457188 s, 9.0 MB/s 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:23.057 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:23.317 512+0 records in 00:15:23.317 512+0 records out 00:15:23.317 67108864 bytes (67 MB, 64 MiB) copied, 0.374561 s, 179 MB/s 00:15:23.317 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:23.317 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:23.317 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:23.317 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:23.317 12:41:22 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:23.317 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:23.317 12:41:22 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:23.577 [2024-12-14 12:41:23.190337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.577 [2024-12-14 12:41:23.206127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.577 "name": "raid_bdev1", 00:15:23.577 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:23.577 "strip_size_kb": 64, 00:15:23.577 "state": "online", 00:15:23.577 "raid_level": "raid5f", 00:15:23.577 "superblock": false, 00:15:23.577 "num_base_bdevs": 3, 00:15:23.577 "num_base_bdevs_discovered": 2, 00:15:23.577 "num_base_bdevs_operational": 2, 00:15:23.577 "base_bdevs_list": [ 00:15:23.577 { 00:15:23.577 "name": null, 00:15:23.577 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:23.577 "is_configured": false, 00:15:23.577 "data_offset": 0, 00:15:23.577 "data_size": 65536 00:15:23.577 }, 00:15:23.577 { 00:15:23.577 "name": "BaseBdev2", 00:15:23.577 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:23.577 "is_configured": true, 00:15:23.577 "data_offset": 0, 00:15:23.577 "data_size": 65536 00:15:23.577 }, 00:15:23.577 { 00:15:23.577 "name": "BaseBdev3", 00:15:23.577 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:23.577 "is_configured": true, 00:15:23.577 "data_offset": 0, 00:15:23.577 "data_size": 65536 00:15:23.577 } 00:15:23.577 ] 00:15:23.577 }' 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.577 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.147 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:24.147 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.147 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.147 [2024-12-14 12:41:23.681315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.147 [2024-12-14 12:41:23.698006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:24.147 12:41:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.147 12:41:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:24.147 [2024-12-14 12:41:23.706638] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.086 
12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.086 "name": "raid_bdev1", 00:15:25.086 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:25.086 "strip_size_kb": 64, 00:15:25.086 "state": "online", 00:15:25.086 "raid_level": "raid5f", 00:15:25.086 "superblock": false, 00:15:25.086 "num_base_bdevs": 3, 00:15:25.086 "num_base_bdevs_discovered": 3, 00:15:25.086 "num_base_bdevs_operational": 3, 00:15:25.086 "process": { 00:15:25.086 "type": "rebuild", 00:15:25.086 "target": "spare", 00:15:25.086 "progress": { 00:15:25.086 "blocks": 20480, 00:15:25.086 "percent": 15 00:15:25.086 } 00:15:25.086 }, 00:15:25.086 "base_bdevs_list": [ 00:15:25.086 { 00:15:25.086 "name": "spare", 00:15:25.086 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:25.086 "is_configured": true, 00:15:25.086 "data_offset": 0, 00:15:25.086 "data_size": 65536 00:15:25.086 }, 00:15:25.086 { 00:15:25.086 "name": "BaseBdev2", 00:15:25.086 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:25.086 "is_configured": true, 00:15:25.086 "data_offset": 0, 00:15:25.086 "data_size": 65536 00:15:25.086 }, 00:15:25.086 
{ 00:15:25.086 "name": "BaseBdev3", 00:15:25.086 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:25.086 "is_configured": true, 00:15:25.086 "data_offset": 0, 00:15:25.086 "data_size": 65536 00:15:25.086 } 00:15:25.086 ] 00:15:25.086 }' 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.086 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.346 [2024-12-14 12:41:24.833482] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.346 [2024-12-14 12:41:24.915360] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:25.346 [2024-12-14 12:41:24.915436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.346 [2024-12-14 12:41:24.915453] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:25.346 [2024-12-14 12:41:24.915462] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.346 12:41:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.346 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.346 "name": "raid_bdev1", 00:15:25.346 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:25.346 "strip_size_kb": 64, 00:15:25.346 "state": "online", 00:15:25.346 "raid_level": "raid5f", 00:15:25.346 "superblock": false, 00:15:25.346 "num_base_bdevs": 3, 00:15:25.346 "num_base_bdevs_discovered": 2, 00:15:25.346 "num_base_bdevs_operational": 2, 00:15:25.346 "base_bdevs_list": [ 00:15:25.346 { 00:15:25.346 "name": null, 00:15:25.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.346 
"is_configured": false, 00:15:25.346 "data_offset": 0, 00:15:25.346 "data_size": 65536 00:15:25.346 }, 00:15:25.346 { 00:15:25.346 "name": "BaseBdev2", 00:15:25.346 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:25.346 "is_configured": true, 00:15:25.346 "data_offset": 0, 00:15:25.346 "data_size": 65536 00:15:25.346 }, 00:15:25.346 { 00:15:25.346 "name": "BaseBdev3", 00:15:25.346 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:25.346 "is_configured": true, 00:15:25.346 "data_offset": 0, 00:15:25.346 "data_size": 65536 00:15:25.346 } 00:15:25.346 ] 00:15:25.346 }' 00:15:25.346 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.346 12:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.017 "name": 
"raid_bdev1", 00:15:26.017 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:26.017 "strip_size_kb": 64, 00:15:26.017 "state": "online", 00:15:26.017 "raid_level": "raid5f", 00:15:26.017 "superblock": false, 00:15:26.017 "num_base_bdevs": 3, 00:15:26.017 "num_base_bdevs_discovered": 2, 00:15:26.017 "num_base_bdevs_operational": 2, 00:15:26.017 "base_bdevs_list": [ 00:15:26.017 { 00:15:26.017 "name": null, 00:15:26.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.017 "is_configured": false, 00:15:26.017 "data_offset": 0, 00:15:26.017 "data_size": 65536 00:15:26.017 }, 00:15:26.017 { 00:15:26.017 "name": "BaseBdev2", 00:15:26.017 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:26.017 "is_configured": true, 00:15:26.017 "data_offset": 0, 00:15:26.017 "data_size": 65536 00:15:26.017 }, 00:15:26.017 { 00:15:26.017 "name": "BaseBdev3", 00:15:26.017 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:26.017 "is_configured": true, 00:15:26.017 "data_offset": 0, 00:15:26.017 "data_size": 65536 00:15:26.017 } 00:15:26.017 ] 00:15:26.017 }' 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.017 [2024-12-14 12:41:25.572729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.017 [2024-12-14 
12:41:25.589286] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.017 12:41:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:26.017 [2024-12-14 12:41:25.597042] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.955 "name": "raid_bdev1", 00:15:26.955 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:26.955 "strip_size_kb": 64, 00:15:26.955 "state": "online", 00:15:26.955 "raid_level": "raid5f", 00:15:26.955 "superblock": false, 00:15:26.955 "num_base_bdevs": 3, 00:15:26.955 "num_base_bdevs_discovered": 3, 00:15:26.955 "num_base_bdevs_operational": 3, 
00:15:26.955 "process": { 00:15:26.955 "type": "rebuild", 00:15:26.955 "target": "spare", 00:15:26.955 "progress": { 00:15:26.955 "blocks": 20480, 00:15:26.955 "percent": 15 00:15:26.955 } 00:15:26.955 }, 00:15:26.955 "base_bdevs_list": [ 00:15:26.955 { 00:15:26.955 "name": "spare", 00:15:26.955 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:26.955 "is_configured": true, 00:15:26.955 "data_offset": 0, 00:15:26.955 "data_size": 65536 00:15:26.955 }, 00:15:26.955 { 00:15:26.955 "name": "BaseBdev2", 00:15:26.955 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:26.955 "is_configured": true, 00:15:26.955 "data_offset": 0, 00:15:26.955 "data_size": 65536 00:15:26.955 }, 00:15:26.955 { 00:15:26.955 "name": "BaseBdev3", 00:15:26.955 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:26.955 "is_configured": true, 00:15:26.955 "data_offset": 0, 00:15:26.955 "data_size": 65536 00:15:26.955 } 00:15:26.955 ] 00:15:26.955 }' 00:15:26.955 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=541 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 
-- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.215 "name": "raid_bdev1", 00:15:27.215 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:27.215 "strip_size_kb": 64, 00:15:27.215 "state": "online", 00:15:27.215 "raid_level": "raid5f", 00:15:27.215 "superblock": false, 00:15:27.215 "num_base_bdevs": 3, 00:15:27.215 "num_base_bdevs_discovered": 3, 00:15:27.215 "num_base_bdevs_operational": 3, 00:15:27.215 "process": { 00:15:27.215 "type": "rebuild", 00:15:27.215 "target": "spare", 00:15:27.215 "progress": { 00:15:27.215 "blocks": 22528, 00:15:27.215 "percent": 17 00:15:27.215 } 00:15:27.215 }, 00:15:27.215 "base_bdevs_list": [ 00:15:27.215 { 00:15:27.215 "name": "spare", 00:15:27.215 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:27.215 "is_configured": true, 00:15:27.215 "data_offset": 0, 00:15:27.215 "data_size": 65536 00:15:27.215 }, 00:15:27.215 { 00:15:27.215 "name": "BaseBdev2", 
00:15:27.215 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:27.215 "is_configured": true, 00:15:27.215 "data_offset": 0, 00:15:27.215 "data_size": 65536 00:15:27.215 }, 00:15:27.215 { 00:15:27.215 "name": "BaseBdev3", 00:15:27.215 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:27.215 "is_configured": true, 00:15:27.215 "data_offset": 0, 00:15:27.215 "data_size": 65536 00:15:27.215 } 00:15:27.215 ] 00:15:27.215 }' 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.215 12:41:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:28.155 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:28.155 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.155 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:28.155 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:28.155 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:28.155 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:28.415 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.415 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.415 12:41:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.415 
12:41:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.415 12:41:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.415 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:28.415 "name": "raid_bdev1", 00:15:28.415 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:28.415 "strip_size_kb": 64, 00:15:28.415 "state": "online", 00:15:28.415 "raid_level": "raid5f", 00:15:28.415 "superblock": false, 00:15:28.415 "num_base_bdevs": 3, 00:15:28.415 "num_base_bdevs_discovered": 3, 00:15:28.415 "num_base_bdevs_operational": 3, 00:15:28.415 "process": { 00:15:28.415 "type": "rebuild", 00:15:28.415 "target": "spare", 00:15:28.415 "progress": { 00:15:28.415 "blocks": 45056, 00:15:28.415 "percent": 34 00:15:28.415 } 00:15:28.415 }, 00:15:28.415 "base_bdevs_list": [ 00:15:28.415 { 00:15:28.415 "name": "spare", 00:15:28.415 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:28.415 "is_configured": true, 00:15:28.415 "data_offset": 0, 00:15:28.415 "data_size": 65536 00:15:28.415 }, 00:15:28.415 { 00:15:28.415 "name": "BaseBdev2", 00:15:28.415 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:28.415 "is_configured": true, 00:15:28.415 "data_offset": 0, 00:15:28.415 "data_size": 65536 00:15:28.415 }, 00:15:28.415 { 00:15:28.415 "name": "BaseBdev3", 00:15:28.415 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:28.415 "is_configured": true, 00:15:28.415 "data_offset": 0, 00:15:28.415 "data_size": 65536 00:15:28.415 } 00:15:28.415 ] 00:15:28.415 }' 00:15:28.415 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:28.416 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.416 12:41:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.416 12:41:28 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.416 12:41:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.353 "name": "raid_bdev1", 00:15:29.353 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:29.353 "strip_size_kb": 64, 00:15:29.353 "state": "online", 00:15:29.353 "raid_level": "raid5f", 00:15:29.353 "superblock": false, 00:15:29.353 "num_base_bdevs": 3, 00:15:29.353 "num_base_bdevs_discovered": 3, 00:15:29.353 "num_base_bdevs_operational": 3, 00:15:29.353 "process": { 00:15:29.353 "type": "rebuild", 00:15:29.353 "target": "spare", 00:15:29.353 "progress": { 00:15:29.353 "blocks": 69632, 00:15:29.353 "percent": 53 00:15:29.353 } 
00:15:29.353 }, 00:15:29.353 "base_bdevs_list": [ 00:15:29.353 { 00:15:29.353 "name": "spare", 00:15:29.353 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:29.353 "is_configured": true, 00:15:29.353 "data_offset": 0, 00:15:29.353 "data_size": 65536 00:15:29.353 }, 00:15:29.353 { 00:15:29.353 "name": "BaseBdev2", 00:15:29.353 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:29.353 "is_configured": true, 00:15:29.353 "data_offset": 0, 00:15:29.353 "data_size": 65536 00:15:29.353 }, 00:15:29.353 { 00:15:29.353 "name": "BaseBdev3", 00:15:29.353 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:29.353 "is_configured": true, 00:15:29.353 "data_offset": 0, 00:15:29.353 "data_size": 65536 00:15:29.353 } 00:15:29.353 ] 00:15:29.353 }' 00:15:29.353 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.613 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.613 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.613 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.613 12:41:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.552 12:41:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.552 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.552 "name": "raid_bdev1", 00:15:30.552 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:30.552 "strip_size_kb": 64, 00:15:30.552 "state": "online", 00:15:30.552 "raid_level": "raid5f", 00:15:30.552 "superblock": false, 00:15:30.552 "num_base_bdevs": 3, 00:15:30.552 "num_base_bdevs_discovered": 3, 00:15:30.552 "num_base_bdevs_operational": 3, 00:15:30.552 "process": { 00:15:30.552 "type": "rebuild", 00:15:30.552 "target": "spare", 00:15:30.552 "progress": { 00:15:30.552 "blocks": 92160, 00:15:30.552 "percent": 70 00:15:30.552 } 00:15:30.552 }, 00:15:30.552 "base_bdevs_list": [ 00:15:30.552 { 00:15:30.552 "name": "spare", 00:15:30.552 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:30.552 "is_configured": true, 00:15:30.552 "data_offset": 0, 00:15:30.552 "data_size": 65536 00:15:30.552 }, 00:15:30.552 { 00:15:30.552 "name": "BaseBdev2", 00:15:30.552 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:30.552 "is_configured": true, 00:15:30.552 "data_offset": 0, 00:15:30.552 "data_size": 65536 00:15:30.552 }, 00:15:30.552 { 00:15:30.552 "name": "BaseBdev3", 00:15:30.552 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:30.552 "is_configured": true, 00:15:30.553 "data_offset": 0, 00:15:30.553 "data_size": 65536 00:15:30.553 } 00:15:30.553 ] 00:15:30.553 }' 00:15:30.553 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# jq -r '.process.type // "none"' 00:15:30.553 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.553 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.811 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.812 12:41:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.749 "name": "raid_bdev1", 00:15:31.749 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:31.749 "strip_size_kb": 64, 00:15:31.749 "state": "online", 00:15:31.749 "raid_level": "raid5f", 00:15:31.749 "superblock": 
false, 00:15:31.749 "num_base_bdevs": 3, 00:15:31.749 "num_base_bdevs_discovered": 3, 00:15:31.749 "num_base_bdevs_operational": 3, 00:15:31.749 "process": { 00:15:31.749 "type": "rebuild", 00:15:31.749 "target": "spare", 00:15:31.749 "progress": { 00:15:31.749 "blocks": 114688, 00:15:31.749 "percent": 87 00:15:31.749 } 00:15:31.749 }, 00:15:31.749 "base_bdevs_list": [ 00:15:31.749 { 00:15:31.749 "name": "spare", 00:15:31.749 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:31.749 "is_configured": true, 00:15:31.749 "data_offset": 0, 00:15:31.749 "data_size": 65536 00:15:31.749 }, 00:15:31.749 { 00:15:31.749 "name": "BaseBdev2", 00:15:31.749 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:31.749 "is_configured": true, 00:15:31.749 "data_offset": 0, 00:15:31.749 "data_size": 65536 00:15:31.749 }, 00:15:31.749 { 00:15:31.749 "name": "BaseBdev3", 00:15:31.749 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:31.749 "is_configured": true, 00:15:31.749 "data_offset": 0, 00:15:31.749 "data_size": 65536 00:15:31.749 } 00:15:31.749 ] 00:15:31.749 }' 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.749 12:41:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.317 [2024-12-14 12:41:32.042114] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:32.317 [2024-12-14 12:41:32.042197] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:32.317 [2024-12-14 12:41:32.042236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.886 "name": "raid_bdev1", 00:15:32.886 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:32.886 "strip_size_kb": 64, 00:15:32.886 "state": "online", 00:15:32.886 "raid_level": "raid5f", 00:15:32.886 "superblock": false, 00:15:32.886 "num_base_bdevs": 3, 00:15:32.886 "num_base_bdevs_discovered": 3, 00:15:32.886 "num_base_bdevs_operational": 3, 00:15:32.886 "base_bdevs_list": [ 00:15:32.886 { 00:15:32.886 "name": "spare", 00:15:32.886 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:32.886 "is_configured": true, 00:15:32.886 "data_offset": 0, 00:15:32.886 "data_size": 65536 00:15:32.886 }, 00:15:32.886 { 00:15:32.886 "name": "BaseBdev2", 00:15:32.886 "uuid": 
"7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:32.886 "is_configured": true, 00:15:32.886 "data_offset": 0, 00:15:32.886 "data_size": 65536 00:15:32.886 }, 00:15:32.886 { 00:15:32.886 "name": "BaseBdev3", 00:15:32.886 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:32.886 "is_configured": true, 00:15:32.886 "data_offset": 0, 00:15:32.886 "data_size": 65536 00:15:32.886 } 00:15:32.886 ] 00:15:32.886 }' 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.886 12:41:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.145 "name": "raid_bdev1", 00:15:33.145 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:33.145 "strip_size_kb": 64, 00:15:33.145 "state": "online", 00:15:33.145 "raid_level": "raid5f", 00:15:33.145 "superblock": false, 00:15:33.145 "num_base_bdevs": 3, 00:15:33.145 "num_base_bdevs_discovered": 3, 00:15:33.145 "num_base_bdevs_operational": 3, 00:15:33.145 "base_bdevs_list": [ 00:15:33.145 { 00:15:33.145 "name": "spare", 00:15:33.145 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:33.145 "is_configured": true, 00:15:33.145 "data_offset": 0, 00:15:33.145 "data_size": 65536 00:15:33.145 }, 00:15:33.145 { 00:15:33.145 "name": "BaseBdev2", 00:15:33.145 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:33.145 "is_configured": true, 00:15:33.145 "data_offset": 0, 00:15:33.145 "data_size": 65536 00:15:33.145 }, 00:15:33.145 { 00:15:33.145 "name": "BaseBdev3", 00:15:33.145 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:33.145 "is_configured": true, 00:15:33.145 "data_offset": 0, 00:15:33.145 "data_size": 65536 00:15:33.145 } 00:15:33.145 ] 00:15:33.145 }' 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:33.145 "name": "raid_bdev1", 00:15:33.145 "uuid": "83c6cfd1-7f05-45e9-a029-d5dd4ab0bd2e", 00:15:33.145 "strip_size_kb": 64, 00:15:33.145 "state": "online", 00:15:33.145 "raid_level": "raid5f", 00:15:33.145 "superblock": false, 00:15:33.145 "num_base_bdevs": 3, 00:15:33.145 "num_base_bdevs_discovered": 3, 00:15:33.145 "num_base_bdevs_operational": 3, 00:15:33.145 "base_bdevs_list": [ 00:15:33.145 { 00:15:33.145 "name": "spare", 00:15:33.145 "uuid": "6f615e83-e36c-50ae-8d08-27e8e8b6fbd5", 00:15:33.145 "is_configured": true, 00:15:33.145 "data_offset": 
0, 00:15:33.145 "data_size": 65536 00:15:33.145 }, 00:15:33.145 { 00:15:33.145 "name": "BaseBdev2", 00:15:33.145 "uuid": "7074a272-76cd-5fb9-bdcc-b84e92abe139", 00:15:33.145 "is_configured": true, 00:15:33.145 "data_offset": 0, 00:15:33.145 "data_size": 65536 00:15:33.145 }, 00:15:33.145 { 00:15:33.145 "name": "BaseBdev3", 00:15:33.145 "uuid": "82c27e10-aeca-583b-b26e-14f62aa3a798", 00:15:33.145 "is_configured": true, 00:15:33.145 "data_offset": 0, 00:15:33.145 "data_size": 65536 00:15:33.145 } 00:15:33.145 ] 00:15:33.145 }' 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:33.145 12:41:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.714 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:33.714 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.714 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.714 [2024-12-14 12:41:33.224003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:33.715 [2024-12-14 12:41:33.224036] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.715 [2024-12-14 12:41:33.224136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.715 [2024-12-14 12:41:33.224217] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.715 [2024-12-14 12:41:33.224238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # jq length 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:33.715 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:33.974 /dev/nbd0 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.975 1+0 records in 00:15:33.975 1+0 records out 00:15:33.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000208579 s, 19.6 MB/s 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:33.975 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:33.975 
12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:34.234 /dev/nbd1 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:34.234 1+0 records in 00:15:34.234 1+0 records out 00:15:34.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439643 s, 9.3 MB/s 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.234 12:41:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 
00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.494 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 83323 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 83323 ']' 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 83323 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83323 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:34.754 killing process with pid 83323 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83323' 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 83323 00:15:34.754 Received shutdown signal, test time was about 60.000000 seconds 00:15:34.754 00:15:34.754 Latency(us) 00:15:34.754 [2024-12-14T12:41:34.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.754 [2024-12-14T12:41:34.492Z] =================================================================================================================== 00:15:34.754 [2024-12-14T12:41:34.492Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:34.754 [2024-12-14 12:41:34.416186] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:34.754 12:41:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 83323 00:15:35.324 [2024-12-14 12:41:34.798284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:36.262 00:15:36.262 real 0m15.186s 00:15:36.262 user 0m18.727s 00:15:36.262 sys 0m1.925s 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.262 ************************************ 00:15:36.262 END TEST raid5f_rebuild_test 00:15:36.262 ************************************ 00:15:36.262 12:41:35 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:36.262 12:41:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:36.262 12:41:35 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.262 12:41:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:36.262 ************************************ 00:15:36.262 START TEST raid5f_rebuild_test_sb 00:15:36.262 ************************************ 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:36.262 12:41:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=83757 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 83757 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83757 
']' 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:36.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:36.262 12:41:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:36.522 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:36.522 Zero copy mechanism will not be used. 00:15:36.522 [2024-12-14 12:41:36.033387] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:36.522 [2024-12-14 12:41:36.033498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83757 ] 00:15:36.522 [2024-12-14 12:41:36.190197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.782 [2024-12-14 12:41:36.298209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.782 [2024-12-14 12:41:36.483926] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:36.782 [2024-12-14 12:41:36.483988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:37.352 12:41:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.352 BaseBdev1_malloc 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.352 [2024-12-14 12:41:36.901796] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:37.352 [2024-12-14 12:41:36.901859] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.352 [2024-12-14 12:41:36.901881] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:37.352 [2024-12-14 12:41:36.901892] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.352 [2024-12-14 12:41:36.904030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.352 [2024-12-14 12:41:36.904075] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:37.352 BaseBdev1 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # 
rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.352 BaseBdev2_malloc 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.352 [2024-12-14 12:41:36.953593] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:37.352 [2024-12-14 12:41:36.953652] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.352 [2024-12-14 12:41:36.953671] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:37.352 [2024-12-14 12:41:36.953682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.352 [2024-12-14 12:41:36.955700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.352 [2024-12-14 12:41:36.955736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:37.352 BaseBdev2 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:37.352 12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 
12:41:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.352 BaseBdev3_malloc 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.352 [2024-12-14 12:41:37.021744] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:37.352 [2024-12-14 12:41:37.021793] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.352 [2024-12-14 12:41:37.021812] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:37.352 [2024-12-14 12:41:37.021823] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.352 [2024-12-14 12:41:37.023829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.352 [2024-12-14 12:41:37.023866] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:37.352 BaseBdev3 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.352 spare_malloc 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 
-- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.352 spare_delay 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.352 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.612 [2024-12-14 12:41:37.087582] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:37.612 [2024-12-14 12:41:37.087632] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.612 [2024-12-14 12:41:37.087653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:37.612 [2024-12-14 12:41:37.087664] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.612 [2024-12-14 12:41:37.089855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.612 [2024-12-14 12:41:37.089899] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:37.612 spare 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:37.612 [2024-12-14 12:41:37.099630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:37.612 [2024-12-14 12:41:37.101425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:37.612 [2024-12-14 12:41:37.101496] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:37.612 [2024-12-14 12:41:37.101674] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:37.612 [2024-12-14 12:41:37.101694] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:37.612 [2024-12-14 12:41:37.101936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:37.612 [2024-12-14 12:41:37.107609] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:37.612 [2024-12-14 12:41:37.107637] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:37.612 [2024-12-14 12:41:37.107825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.612 "name": "raid_bdev1", 00:15:37.612 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:37.612 "strip_size_kb": 64, 00:15:37.612 "state": "online", 00:15:37.612 "raid_level": "raid5f", 00:15:37.612 "superblock": true, 00:15:37.612 "num_base_bdevs": 3, 00:15:37.612 "num_base_bdevs_discovered": 3, 00:15:37.612 "num_base_bdevs_operational": 3, 00:15:37.612 "base_bdevs_list": [ 00:15:37.612 { 00:15:37.612 "name": "BaseBdev1", 00:15:37.612 "uuid": "4888c7b7-fc5d-58e1-9230-7bd5492410e4", 00:15:37.612 "is_configured": true, 00:15:37.612 "data_offset": 2048, 00:15:37.612 "data_size": 63488 00:15:37.612 }, 00:15:37.612 { 00:15:37.612 "name": "BaseBdev2", 00:15:37.612 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:37.612 "is_configured": true, 00:15:37.612 "data_offset": 2048, 00:15:37.612 "data_size": 63488 00:15:37.612 }, 00:15:37.612 { 00:15:37.612 "name": 
"BaseBdev3", 00:15:37.612 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:37.612 "is_configured": true, 00:15:37.612 "data_offset": 2048, 00:15:37.612 "data_size": 63488 00:15:37.612 } 00:15:37.612 ] 00:15:37.612 }' 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.612 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.872 [2024-12-14 12:41:37.561565] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:37.872 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:38.133 12:41:37 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:38.133 [2024-12-14 12:41:37.813001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:38.133 /dev/nbd0 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i = 1 )) 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:38.133 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:38.133 1+0 records in 00:15:38.133 1+0 records out 00:15:38.133 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349202 s, 11.7 MB/s 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 
00:15:38.393 12:41:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:38.652 496+0 records in 00:15:38.652 496+0 records out 00:15:38.652 65011712 bytes (65 MB, 62 MiB) copied, 0.350893 s, 185 MB/s 00:15:38.652 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:38.652 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:38.652 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:38.652 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.653 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:38.653 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.653 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:38.912 [2024-12-14 12:41:38.443608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:38.912 12:41:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.912 [2024-12-14 12:41:38.459507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.912 "name": "raid_bdev1", 00:15:38.912 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:38.912 "strip_size_kb": 64, 00:15:38.912 "state": "online", 00:15:38.912 "raid_level": "raid5f", 00:15:38.912 "superblock": true, 00:15:38.912 "num_base_bdevs": 3, 00:15:38.912 "num_base_bdevs_discovered": 2, 00:15:38.912 "num_base_bdevs_operational": 2, 00:15:38.912 "base_bdevs_list": [ 00:15:38.912 { 00:15:38.912 "name": null, 00:15:38.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.912 "is_configured": false, 00:15:38.912 "data_offset": 0, 00:15:38.912 "data_size": 63488 00:15:38.912 }, 00:15:38.912 { 00:15:38.912 "name": "BaseBdev2", 00:15:38.912 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:38.912 "is_configured": true, 00:15:38.912 "data_offset": 2048, 00:15:38.912 "data_size": 63488 00:15:38.912 }, 00:15:38.912 { 00:15:38.912 "name": "BaseBdev3", 00:15:38.912 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:38.912 "is_configured": true, 00:15:38.912 "data_offset": 2048, 00:15:38.912 "data_size": 63488 00:15:38.912 } 00:15:38.912 ] 00:15:38.912 }' 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.912 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.172 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:39.172 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.172 12:41:38 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.172 [2024-12-14 12:41:38.874807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.172 [2024-12-14 12:41:38.891284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:39.172 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.172 12:41:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:39.172 [2024-12-14 12:41:38.898747] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.551 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.552 "name": "raid_bdev1", 00:15:40.552 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 
00:15:40.552 "strip_size_kb": 64, 00:15:40.552 "state": "online", 00:15:40.552 "raid_level": "raid5f", 00:15:40.552 "superblock": true, 00:15:40.552 "num_base_bdevs": 3, 00:15:40.552 "num_base_bdevs_discovered": 3, 00:15:40.552 "num_base_bdevs_operational": 3, 00:15:40.552 "process": { 00:15:40.552 "type": "rebuild", 00:15:40.552 "target": "spare", 00:15:40.552 "progress": { 00:15:40.552 "blocks": 20480, 00:15:40.552 "percent": 16 00:15:40.552 } 00:15:40.552 }, 00:15:40.552 "base_bdevs_list": [ 00:15:40.552 { 00:15:40.552 "name": "spare", 00:15:40.552 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:40.552 "is_configured": true, 00:15:40.552 "data_offset": 2048, 00:15:40.552 "data_size": 63488 00:15:40.552 }, 00:15:40.552 { 00:15:40.552 "name": "BaseBdev2", 00:15:40.552 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:40.552 "is_configured": true, 00:15:40.552 "data_offset": 2048, 00:15:40.552 "data_size": 63488 00:15:40.552 }, 00:15:40.552 { 00:15:40.552 "name": "BaseBdev3", 00:15:40.552 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:40.552 "is_configured": true, 00:15:40.552 "data_offset": 2048, 00:15:40.552 "data_size": 63488 00:15:40.552 } 00:15:40.552 ] 00:15:40.552 }' 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.552 12:41:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:40.552 [2024-12-14 12:41:40.029684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.552 [2024-12-14 12:41:40.107266] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:40.552 [2024-12-14 12:41:40.107321] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.552 [2024-12-14 12:41:40.107338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:40.552 [2024-12-14 12:41:40.107346] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.552 
12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.552 "name": "raid_bdev1", 00:15:40.552 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:40.552 "strip_size_kb": 64, 00:15:40.552 "state": "online", 00:15:40.552 "raid_level": "raid5f", 00:15:40.552 "superblock": true, 00:15:40.552 "num_base_bdevs": 3, 00:15:40.552 "num_base_bdevs_discovered": 2, 00:15:40.552 "num_base_bdevs_operational": 2, 00:15:40.552 "base_bdevs_list": [ 00:15:40.552 { 00:15:40.552 "name": null, 00:15:40.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.552 "is_configured": false, 00:15:40.552 "data_offset": 0, 00:15:40.552 "data_size": 63488 00:15:40.552 }, 00:15:40.552 { 00:15:40.552 "name": "BaseBdev2", 00:15:40.552 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:40.552 "is_configured": true, 00:15:40.552 "data_offset": 2048, 00:15:40.552 "data_size": 63488 00:15:40.552 }, 00:15:40.552 { 00:15:40.552 "name": "BaseBdev3", 00:15:40.552 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:40.552 "is_configured": true, 00:15:40.552 "data_offset": 2048, 00:15:40.552 "data_size": 63488 00:15:40.552 } 00:15:40.552 ] 00:15:40.552 }' 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.552 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:41.123 12:41:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.123 "name": "raid_bdev1", 00:15:41.123 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:41.123 "strip_size_kb": 64, 00:15:41.123 "state": "online", 00:15:41.123 "raid_level": "raid5f", 00:15:41.123 "superblock": true, 00:15:41.123 "num_base_bdevs": 3, 00:15:41.123 "num_base_bdevs_discovered": 2, 00:15:41.123 "num_base_bdevs_operational": 2, 00:15:41.123 "base_bdevs_list": [ 00:15:41.123 { 00:15:41.123 "name": null, 00:15:41.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.123 "is_configured": false, 00:15:41.123 "data_offset": 0, 00:15:41.123 "data_size": 63488 00:15:41.123 }, 00:15:41.123 { 00:15:41.123 "name": "BaseBdev2", 00:15:41.123 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:41.123 "is_configured": true, 00:15:41.123 "data_offset": 2048, 00:15:41.123 "data_size": 63488 00:15:41.123 }, 00:15:41.123 { 00:15:41.123 "name": "BaseBdev3", 00:15:41.123 "uuid": 
"f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:41.123 "is_configured": true, 00:15:41.123 "data_offset": 2048, 00:15:41.123 "data_size": 63488 00:15:41.123 } 00:15:41.123 ] 00:15:41.123 }' 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.123 [2024-12-14 12:41:40.778871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.123 [2024-12-14 12:41:40.794077] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.123 12:41:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:41.123 [2024-12-14 12:41:40.801422] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.103 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.363 "name": "raid_bdev1", 00:15:42.363 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:42.363 "strip_size_kb": 64, 00:15:42.363 "state": "online", 00:15:42.363 "raid_level": "raid5f", 00:15:42.363 "superblock": true, 00:15:42.363 "num_base_bdevs": 3, 00:15:42.363 "num_base_bdevs_discovered": 3, 00:15:42.363 "num_base_bdevs_operational": 3, 00:15:42.363 "process": { 00:15:42.363 "type": "rebuild", 00:15:42.363 "target": "spare", 00:15:42.363 "progress": { 00:15:42.363 "blocks": 20480, 00:15:42.363 "percent": 16 00:15:42.363 } 00:15:42.363 }, 00:15:42.363 "base_bdevs_list": [ 00:15:42.363 { 00:15:42.363 "name": "spare", 00:15:42.363 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:42.363 "is_configured": true, 00:15:42.363 "data_offset": 2048, 00:15:42.363 "data_size": 63488 00:15:42.363 }, 00:15:42.363 { 00:15:42.363 "name": "BaseBdev2", 00:15:42.363 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:42.363 "is_configured": true, 00:15:42.363 "data_offset": 2048, 00:15:42.363 "data_size": 63488 00:15:42.363 }, 00:15:42.363 { 00:15:42.363 "name": "BaseBdev3", 00:15:42.363 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:42.363 
"is_configured": true, 00:15:42.363 "data_offset": 2048, 00:15:42.363 "data_size": 63488 00:15:42.363 } 00:15:42.363 ] 00:15:42.363 }' 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:42.363 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=556 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.363 12:41:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.363 12:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.363 "name": "raid_bdev1", 00:15:42.363 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:42.363 "strip_size_kb": 64, 00:15:42.363 "state": "online", 00:15:42.363 "raid_level": "raid5f", 00:15:42.363 "superblock": true, 00:15:42.363 "num_base_bdevs": 3, 00:15:42.363 "num_base_bdevs_discovered": 3, 00:15:42.363 "num_base_bdevs_operational": 3, 00:15:42.363 "process": { 00:15:42.363 "type": "rebuild", 00:15:42.363 "target": "spare", 00:15:42.363 "progress": { 00:15:42.363 "blocks": 22528, 00:15:42.363 "percent": 17 00:15:42.363 } 00:15:42.363 }, 00:15:42.363 "base_bdevs_list": [ 00:15:42.363 { 00:15:42.363 "name": "spare", 00:15:42.363 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:42.363 "is_configured": true, 00:15:42.363 "data_offset": 2048, 00:15:42.363 "data_size": 63488 00:15:42.363 }, 00:15:42.363 { 00:15:42.363 "name": "BaseBdev2", 00:15:42.363 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:42.363 "is_configured": true, 00:15:42.363 "data_offset": 2048, 00:15:42.363 "data_size": 63488 00:15:42.363 }, 00:15:42.363 { 00:15:42.363 "name": "BaseBdev3", 00:15:42.363 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:42.363 "is_configured": true, 00:15:42.363 "data_offset": 2048, 00:15:42.363 "data_size": 63488 00:15:42.363 } 00:15:42.363 ] 00:15:42.363 }' 00:15:42.363 12:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq 
-r '.process.type // "none"' 00:15:42.363 12:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.363 12:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.623 12:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.623 12:41:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.562 "name": "raid_bdev1", 00:15:43.562 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:43.562 "strip_size_kb": 64, 00:15:43.562 "state": "online", 00:15:43.562 
"raid_level": "raid5f", 00:15:43.562 "superblock": true, 00:15:43.562 "num_base_bdevs": 3, 00:15:43.562 "num_base_bdevs_discovered": 3, 00:15:43.562 "num_base_bdevs_operational": 3, 00:15:43.562 "process": { 00:15:43.562 "type": "rebuild", 00:15:43.562 "target": "spare", 00:15:43.562 "progress": { 00:15:43.562 "blocks": 47104, 00:15:43.562 "percent": 37 00:15:43.562 } 00:15:43.562 }, 00:15:43.562 "base_bdevs_list": [ 00:15:43.562 { 00:15:43.562 "name": "spare", 00:15:43.562 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:43.562 "is_configured": true, 00:15:43.562 "data_offset": 2048, 00:15:43.562 "data_size": 63488 00:15:43.562 }, 00:15:43.562 { 00:15:43.562 "name": "BaseBdev2", 00:15:43.562 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:43.562 "is_configured": true, 00:15:43.562 "data_offset": 2048, 00:15:43.562 "data_size": 63488 00:15:43.562 }, 00:15:43.562 { 00:15:43.562 "name": "BaseBdev3", 00:15:43.562 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:43.562 "is_configured": true, 00:15:43.562 "data_offset": 2048, 00:15:43.562 "data_size": 63488 00:15:43.562 } 00:15:43.562 ] 00:15:43.562 }' 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.562 12:41:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.943 "name": "raid_bdev1", 00:15:44.943 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:44.943 "strip_size_kb": 64, 00:15:44.943 "state": "online", 00:15:44.943 "raid_level": "raid5f", 00:15:44.943 "superblock": true, 00:15:44.943 "num_base_bdevs": 3, 00:15:44.943 "num_base_bdevs_discovered": 3, 00:15:44.943 "num_base_bdevs_operational": 3, 00:15:44.943 "process": { 00:15:44.943 "type": "rebuild", 00:15:44.943 "target": "spare", 00:15:44.943 "progress": { 00:15:44.943 "blocks": 69632, 00:15:44.943 "percent": 54 00:15:44.943 } 00:15:44.943 }, 00:15:44.943 "base_bdevs_list": [ 00:15:44.943 { 00:15:44.943 "name": "spare", 00:15:44.943 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:44.943 "is_configured": true, 00:15:44.943 "data_offset": 2048, 00:15:44.943 "data_size": 63488 00:15:44.943 }, 00:15:44.943 { 00:15:44.943 "name": "BaseBdev2", 00:15:44.943 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:44.943 
"is_configured": true, 00:15:44.943 "data_offset": 2048, 00:15:44.943 "data_size": 63488 00:15:44.943 }, 00:15:44.943 { 00:15:44.943 "name": "BaseBdev3", 00:15:44.943 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:44.943 "is_configured": true, 00:15:44.943 "data_offset": 2048, 00:15:44.943 "data_size": 63488 00:15:44.943 } 00:15:44.943 ] 00:15:44.943 }' 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.943 12:41:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.880 12:41:45 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.880 "name": "raid_bdev1", 00:15:45.880 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:45.880 "strip_size_kb": 64, 00:15:45.880 "state": "online", 00:15:45.880 "raid_level": "raid5f", 00:15:45.880 "superblock": true, 00:15:45.880 "num_base_bdevs": 3, 00:15:45.880 "num_base_bdevs_discovered": 3, 00:15:45.880 "num_base_bdevs_operational": 3, 00:15:45.880 "process": { 00:15:45.880 "type": "rebuild", 00:15:45.880 "target": "spare", 00:15:45.880 "progress": { 00:15:45.880 "blocks": 92160, 00:15:45.880 "percent": 72 00:15:45.880 } 00:15:45.880 }, 00:15:45.880 "base_bdevs_list": [ 00:15:45.880 { 00:15:45.880 "name": "spare", 00:15:45.880 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:45.880 "is_configured": true, 00:15:45.880 "data_offset": 2048, 00:15:45.880 "data_size": 63488 00:15:45.880 }, 00:15:45.880 { 00:15:45.880 "name": "BaseBdev2", 00:15:45.880 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:45.880 "is_configured": true, 00:15:45.880 "data_offset": 2048, 00:15:45.880 "data_size": 63488 00:15:45.880 }, 00:15:45.880 { 00:15:45.880 "name": "BaseBdev3", 00:15:45.880 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:45.880 "is_configured": true, 00:15:45.880 "data_offset": 2048, 00:15:45.880 "data_size": 63488 00:15:45.880 } 00:15:45.880 ] 00:15:45.880 }' 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.880 12:41:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.256 "name": "raid_bdev1", 00:15:47.256 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:47.256 "strip_size_kb": 64, 00:15:47.256 "state": "online", 00:15:47.256 "raid_level": "raid5f", 00:15:47.256 "superblock": true, 00:15:47.256 "num_base_bdevs": 3, 00:15:47.256 "num_base_bdevs_discovered": 3, 00:15:47.256 "num_base_bdevs_operational": 3, 00:15:47.256 "process": { 00:15:47.256 "type": "rebuild", 00:15:47.256 "target": "spare", 00:15:47.256 "progress": { 00:15:47.256 "blocks": 116736, 
00:15:47.256 "percent": 91 00:15:47.256 } 00:15:47.256 }, 00:15:47.256 "base_bdevs_list": [ 00:15:47.256 { 00:15:47.256 "name": "spare", 00:15:47.256 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:47.256 "is_configured": true, 00:15:47.256 "data_offset": 2048, 00:15:47.256 "data_size": 63488 00:15:47.256 }, 00:15:47.256 { 00:15:47.256 "name": "BaseBdev2", 00:15:47.256 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:47.256 "is_configured": true, 00:15:47.256 "data_offset": 2048, 00:15:47.256 "data_size": 63488 00:15:47.256 }, 00:15:47.256 { 00:15:47.256 "name": "BaseBdev3", 00:15:47.256 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:47.256 "is_configured": true, 00:15:47.256 "data_offset": 2048, 00:15:47.256 "data_size": 63488 00:15:47.256 } 00:15:47.256 ] 00:15:47.256 }' 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.256 12:41:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.515 [2024-12-14 12:41:47.045394] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:47.516 [2024-12-14 12:41:47.045468] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:47.516 [2024-12-14 12:41:47.045599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.087 
12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.087 "name": "raid_bdev1", 00:15:48.087 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:48.087 "strip_size_kb": 64, 00:15:48.087 "state": "online", 00:15:48.087 "raid_level": "raid5f", 00:15:48.087 "superblock": true, 00:15:48.087 "num_base_bdevs": 3, 00:15:48.087 "num_base_bdevs_discovered": 3, 00:15:48.087 "num_base_bdevs_operational": 3, 00:15:48.087 "base_bdevs_list": [ 00:15:48.087 { 00:15:48.087 "name": "spare", 00:15:48.087 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:48.087 "is_configured": true, 00:15:48.087 "data_offset": 2048, 00:15:48.087 "data_size": 63488 00:15:48.087 }, 00:15:48.087 { 00:15:48.087 "name": "BaseBdev2", 00:15:48.087 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:48.087 "is_configured": true, 00:15:48.087 "data_offset": 2048, 00:15:48.087 "data_size": 63488 00:15:48.087 }, 00:15:48.087 { 00:15:48.087 "name": "BaseBdev3", 00:15:48.087 
"uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:48.087 "is_configured": true, 00:15:48.087 "data_offset": 2048, 00:15:48.087 "data_size": 63488 00:15:48.087 } 00:15:48.087 ] 00:15:48.087 }' 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:48.087 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.345 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.345 "name": 
"raid_bdev1", 00:15:48.345 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:48.345 "strip_size_kb": 64, 00:15:48.345 "state": "online", 00:15:48.345 "raid_level": "raid5f", 00:15:48.345 "superblock": true, 00:15:48.345 "num_base_bdevs": 3, 00:15:48.345 "num_base_bdevs_discovered": 3, 00:15:48.345 "num_base_bdevs_operational": 3, 00:15:48.345 "base_bdevs_list": [ 00:15:48.345 { 00:15:48.345 "name": "spare", 00:15:48.345 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:48.345 "is_configured": true, 00:15:48.345 "data_offset": 2048, 00:15:48.345 "data_size": 63488 00:15:48.345 }, 00:15:48.345 { 00:15:48.345 "name": "BaseBdev2", 00:15:48.346 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:48.346 "is_configured": true, 00:15:48.346 "data_offset": 2048, 00:15:48.346 "data_size": 63488 00:15:48.346 }, 00:15:48.346 { 00:15:48.346 "name": "BaseBdev3", 00:15:48.346 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:48.346 "is_configured": true, 00:15:48.346 "data_offset": 2048, 00:15:48.346 "data_size": 63488 00:15:48.346 } 00:15:48.346 ] 00:15:48.346 }' 00:15:48.346 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.346 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.346 12:41:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid5f 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.346 "name": "raid_bdev1", 00:15:48.346 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:48.346 "strip_size_kb": 64, 00:15:48.346 "state": "online", 00:15:48.346 "raid_level": "raid5f", 00:15:48.346 "superblock": true, 00:15:48.346 "num_base_bdevs": 3, 00:15:48.346 "num_base_bdevs_discovered": 3, 00:15:48.346 "num_base_bdevs_operational": 3, 00:15:48.346 "base_bdevs_list": [ 00:15:48.346 { 00:15:48.346 "name": "spare", 00:15:48.346 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:48.346 "is_configured": true, 00:15:48.346 "data_offset": 2048, 00:15:48.346 "data_size": 63488 00:15:48.346 }, 00:15:48.346 { 00:15:48.346 "name": "BaseBdev2", 
00:15:48.346 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:48.346 "is_configured": true, 00:15:48.346 "data_offset": 2048, 00:15:48.346 "data_size": 63488 00:15:48.346 }, 00:15:48.346 { 00:15:48.346 "name": "BaseBdev3", 00:15:48.346 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:48.346 "is_configured": true, 00:15:48.346 "data_offset": 2048, 00:15:48.346 "data_size": 63488 00:15:48.346 } 00:15:48.346 ] 00:15:48.346 }' 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.346 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.913 [2024-12-14 12:41:48.510026] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.913 [2024-12-14 12:41:48.510117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.913 [2024-12-14 12:41:48.510228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.913 [2024-12-14 12:41:48.510344] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.913 [2024-12-14 12:41:48.510425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:48.913 12:41:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:48.913 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:49.172 /dev/nbd0 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 
-- # waitfornbd nbd0 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.172 1+0 records in 00:15:49.172 1+0 records out 00:15:49.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452895 s, 9.0 MB/s 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # 
(( i < 2 )) 00:15:49.172 12:41:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:49.431 /dev/nbd1 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.431 1+0 records in 00:15:49.431 1+0 records out 00:15:49.431 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248942 s, 16.5 MB/s 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:49.431 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:49.690 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:49.690 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:49.690 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:49.690 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:49.690 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:49.691 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.691 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.950 12:41:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:49.950 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:49.951 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:49.951 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:49.951 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:49.951 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.951 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 
-- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 [2024-12-14 12:41:49.694405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:50.211 [2024-12-14 12:41:49.694480] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:50.211 [2024-12-14 12:41:49.694520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:50.211 [2024-12-14 12:41:49.694533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:50.211 [2024-12-14 12:41:49.696795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:50.211 [2024-12-14 12:41:49.696885] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:50.211 [2024-12-14 12:41:49.696982] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:50.211 [2024-12-14 12:41:49.697051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:50.211 [2024-12-14 12:41:49.697206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:50.211 [2024-12-14 12:41:49.697303] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:50.211 spare 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 [2024-12-14 12:41:49.797203] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:50.211 [2024-12-14 12:41:49.797281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:50.211 [2024-12-14 12:41:49.797600] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:50.211 [2024-12-14 12:41:49.802829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:50.211 [2024-12-14 12:41:49.802850] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:50.211 [2024-12-14 12:41:49.803030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.211 12:41:49 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.211 "name": "raid_bdev1", 00:15:50.211 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:50.211 "strip_size_kb": 64, 00:15:50.211 "state": "online", 00:15:50.211 "raid_level": "raid5f", 00:15:50.211 "superblock": true, 00:15:50.211 "num_base_bdevs": 3, 00:15:50.211 "num_base_bdevs_discovered": 3, 00:15:50.211 "num_base_bdevs_operational": 3, 00:15:50.211 "base_bdevs_list": [ 00:15:50.211 { 00:15:50.211 "name": "spare", 00:15:50.211 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:50.211 "is_configured": true, 00:15:50.211 "data_offset": 2048, 00:15:50.211 "data_size": 63488 00:15:50.211 }, 00:15:50.211 { 00:15:50.211 "name": "BaseBdev2", 00:15:50.211 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:50.211 "is_configured": true, 00:15:50.211 "data_offset": 2048, 00:15:50.211 "data_size": 63488 00:15:50.211 }, 00:15:50.211 { 00:15:50.211 "name": "BaseBdev3", 00:15:50.211 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:50.211 "is_configured": true, 00:15:50.211 "data_offset": 2048, 00:15:50.211 "data_size": 63488 00:15:50.211 } 00:15:50.211 ] 00:15:50.211 }' 00:15:50.211 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.212 12:41:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.781 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.782 "name": "raid_bdev1", 00:15:50.782 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:50.782 "strip_size_kb": 64, 00:15:50.782 "state": "online", 00:15:50.782 "raid_level": "raid5f", 00:15:50.782 "superblock": true, 00:15:50.782 "num_base_bdevs": 3, 00:15:50.782 "num_base_bdevs_discovered": 3, 00:15:50.782 "num_base_bdevs_operational": 3, 00:15:50.782 "base_bdevs_list": [ 00:15:50.782 { 00:15:50.782 "name": "spare", 00:15:50.782 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:50.782 "is_configured": true, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 00:15:50.782 }, 00:15:50.782 { 00:15:50.782 "name": "BaseBdev2", 00:15:50.782 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:50.782 "is_configured": true, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 
00:15:50.782 }, 00:15:50.782 { 00:15:50.782 "name": "BaseBdev3", 00:15:50.782 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:50.782 "is_configured": true, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 00:15:50.782 } 00:15:50.782 ] 00:15:50.782 }' 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.782 [2024-12-14 12:41:50.432512] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.782 "name": "raid_bdev1", 00:15:50.782 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:50.782 "strip_size_kb": 64, 00:15:50.782 "state": "online", 00:15:50.782 "raid_level": "raid5f", 00:15:50.782 "superblock": true, 00:15:50.782 "num_base_bdevs": 3, 
00:15:50.782 "num_base_bdevs_discovered": 2, 00:15:50.782 "num_base_bdevs_operational": 2, 00:15:50.782 "base_bdevs_list": [ 00:15:50.782 { 00:15:50.782 "name": null, 00:15:50.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.782 "is_configured": false, 00:15:50.782 "data_offset": 0, 00:15:50.782 "data_size": 63488 00:15:50.782 }, 00:15:50.782 { 00:15:50.782 "name": "BaseBdev2", 00:15:50.782 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:50.782 "is_configured": true, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 00:15:50.782 }, 00:15:50.782 { 00:15:50.782 "name": "BaseBdev3", 00:15:50.782 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:50.782 "is_configured": true, 00:15:50.782 "data_offset": 2048, 00:15:50.782 "data_size": 63488 00:15:50.782 } 00:15:50.782 ] 00:15:50.782 }' 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.782 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.350 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:51.350 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.350 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.350 [2024-12-14 12:41:50.883762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.350 [2024-12-14 12:41:50.884037] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:51.350 [2024-12-14 12:41:50.884129] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:51.350 [2024-12-14 12:41:50.884195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.350 [2024-12-14 12:41:50.899763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:51.350 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.350 12:41:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:51.350 [2024-12-14 12:41:50.906989] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.290 "name": "raid_bdev1", 00:15:52.290 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:52.290 "strip_size_kb": 64, 00:15:52.290 "state": "online", 00:15:52.290 
"raid_level": "raid5f", 00:15:52.290 "superblock": true, 00:15:52.290 "num_base_bdevs": 3, 00:15:52.290 "num_base_bdevs_discovered": 3, 00:15:52.290 "num_base_bdevs_operational": 3, 00:15:52.290 "process": { 00:15:52.290 "type": "rebuild", 00:15:52.290 "target": "spare", 00:15:52.290 "progress": { 00:15:52.290 "blocks": 20480, 00:15:52.290 "percent": 16 00:15:52.290 } 00:15:52.290 }, 00:15:52.290 "base_bdevs_list": [ 00:15:52.290 { 00:15:52.290 "name": "spare", 00:15:52.290 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:52.290 "is_configured": true, 00:15:52.290 "data_offset": 2048, 00:15:52.290 "data_size": 63488 00:15:52.290 }, 00:15:52.290 { 00:15:52.290 "name": "BaseBdev2", 00:15:52.290 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:52.290 "is_configured": true, 00:15:52.290 "data_offset": 2048, 00:15:52.290 "data_size": 63488 00:15:52.290 }, 00:15:52.290 { 00:15:52.290 "name": "BaseBdev3", 00:15:52.290 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:52.290 "is_configured": true, 00:15:52.290 "data_offset": 2048, 00:15:52.290 "data_size": 63488 00:15:52.290 } 00:15:52.290 ] 00:15:52.290 }' 00:15:52.290 12:41:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.290 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.290 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.550 [2024-12-14 12:41:52.069935] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.550 [2024-12-14 12:41:52.115965] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:52.550 [2024-12-14 12:41:52.116172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.550 [2024-12-14 12:41:52.116243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.550 [2024-12-14 12:41:52.116280] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.550 "name": "raid_bdev1", 00:15:52.550 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:52.550 "strip_size_kb": 64, 00:15:52.550 "state": "online", 00:15:52.550 "raid_level": "raid5f", 00:15:52.550 "superblock": true, 00:15:52.550 "num_base_bdevs": 3, 00:15:52.550 "num_base_bdevs_discovered": 2, 00:15:52.550 "num_base_bdevs_operational": 2, 00:15:52.550 "base_bdevs_list": [ 00:15:52.550 { 00:15:52.550 "name": null, 00:15:52.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.550 "is_configured": false, 00:15:52.550 "data_offset": 0, 00:15:52.550 "data_size": 63488 00:15:52.550 }, 00:15:52.550 { 00:15:52.550 "name": "BaseBdev2", 00:15:52.550 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:52.550 "is_configured": true, 00:15:52.550 "data_offset": 2048, 00:15:52.550 "data_size": 63488 00:15:52.550 }, 00:15:52.550 { 00:15:52.550 "name": "BaseBdev3", 00:15:52.550 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:52.550 "is_configured": true, 00:15:52.550 "data_offset": 2048, 00:15:52.550 "data_size": 63488 00:15:52.550 } 00:15:52.550 ] 00:15:52.550 }' 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.550 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.119 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:53.119 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.119 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.119 [2024-12-14 12:41:52.588485] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:53.119 [2024-12-14 12:41:52.588561] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:53.119 [2024-12-14 12:41:52.588586] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:53.119 [2024-12-14 12:41:52.588601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:53.119 [2024-12-14 12:41:52.589170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:53.119 [2024-12-14 12:41:52.589195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:53.119 [2024-12-14 12:41:52.589305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:53.119 [2024-12-14 12:41:52.589337] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:53.119 [2024-12-14 12:41:52.589348] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:53.119 [2024-12-14 12:41:52.589373] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.119 [2024-12-14 12:41:52.605705] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:53.119 spare 00:15:53.119 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.119 12:41:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:53.119 [2024-12-14 12:41:52.613536] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.059 "name": "raid_bdev1", 00:15:54.059 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:54.059 "strip_size_kb": 64, 00:15:54.059 "state": 
"online", 00:15:54.059 "raid_level": "raid5f", 00:15:54.059 "superblock": true, 00:15:54.059 "num_base_bdevs": 3, 00:15:54.059 "num_base_bdevs_discovered": 3, 00:15:54.059 "num_base_bdevs_operational": 3, 00:15:54.059 "process": { 00:15:54.059 "type": "rebuild", 00:15:54.059 "target": "spare", 00:15:54.059 "progress": { 00:15:54.059 "blocks": 20480, 00:15:54.059 "percent": 16 00:15:54.059 } 00:15:54.059 }, 00:15:54.059 "base_bdevs_list": [ 00:15:54.059 { 00:15:54.059 "name": "spare", 00:15:54.059 "uuid": "4652c0d4-2856-551f-8e96-2721145f0666", 00:15:54.059 "is_configured": true, 00:15:54.059 "data_offset": 2048, 00:15:54.059 "data_size": 63488 00:15:54.059 }, 00:15:54.059 { 00:15:54.059 "name": "BaseBdev2", 00:15:54.059 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:54.059 "is_configured": true, 00:15:54.059 "data_offset": 2048, 00:15:54.059 "data_size": 63488 00:15:54.059 }, 00:15:54.059 { 00:15:54.059 "name": "BaseBdev3", 00:15:54.059 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:54.059 "is_configured": true, 00:15:54.059 "data_offset": 2048, 00:15:54.059 "data_size": 63488 00:15:54.059 } 00:15:54.059 ] 00:15:54.059 }' 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.059 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.059 [2024-12-14 12:41:53.752423] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.318 [2024-12-14 12:41:53.822190] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:54.318 [2024-12-14 12:41:53.822247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.318 [2024-12-14 12:41:53.822265] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.318 [2024-12-14 12:41:53.822272] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.318 12:41:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.318 "name": "raid_bdev1", 00:15:54.318 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:54.318 "strip_size_kb": 64, 00:15:54.318 "state": "online", 00:15:54.318 "raid_level": "raid5f", 00:15:54.318 "superblock": true, 00:15:54.318 "num_base_bdevs": 3, 00:15:54.318 "num_base_bdevs_discovered": 2, 00:15:54.318 "num_base_bdevs_operational": 2, 00:15:54.318 "base_bdevs_list": [ 00:15:54.318 { 00:15:54.318 "name": null, 00:15:54.318 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.318 "is_configured": false, 00:15:54.318 "data_offset": 0, 00:15:54.318 "data_size": 63488 00:15:54.318 }, 00:15:54.318 { 00:15:54.318 "name": "BaseBdev2", 00:15:54.318 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:54.318 "is_configured": true, 00:15:54.318 "data_offset": 2048, 00:15:54.318 "data_size": 63488 00:15:54.318 }, 00:15:54.318 { 00:15:54.318 "name": "BaseBdev3", 00:15:54.318 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:54.318 "is_configured": true, 00:15:54.318 "data_offset": 2048, 00:15:54.318 "data_size": 63488 00:15:54.318 } 00:15:54.318 ] 00:15:54.318 }' 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.318 12:41:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.578 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.837 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.837 "name": "raid_bdev1", 00:15:54.837 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:54.837 "strip_size_kb": 64, 00:15:54.837 "state": "online", 00:15:54.837 "raid_level": "raid5f", 00:15:54.837 "superblock": true, 00:15:54.837 "num_base_bdevs": 3, 00:15:54.837 "num_base_bdevs_discovered": 2, 00:15:54.837 "num_base_bdevs_operational": 2, 00:15:54.837 "base_bdevs_list": [ 00:15:54.837 { 00:15:54.837 "name": null, 00:15:54.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.837 "is_configured": false, 00:15:54.837 "data_offset": 0, 00:15:54.837 "data_size": 63488 00:15:54.837 }, 00:15:54.837 { 00:15:54.837 "name": "BaseBdev2", 00:15:54.837 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:54.837 "is_configured": true, 00:15:54.837 "data_offset": 2048, 00:15:54.837 "data_size": 63488 00:15:54.837 }, 00:15:54.837 { 00:15:54.837 "name": "BaseBdev3", 00:15:54.838 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:54.838 
"is_configured": true, 00:15:54.838 "data_offset": 2048, 00:15:54.838 "data_size": 63488 00:15:54.838 } 00:15:54.838 ] 00:15:54.838 }' 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.838 [2024-12-14 12:41:54.439457] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:54.838 [2024-12-14 12:41:54.439557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.838 [2024-12-14 12:41:54.439587] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:54.838 [2024-12-14 12:41:54.439597] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.838 [2024-12-14 12:41:54.440122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.838 
[2024-12-14 12:41:54.440151] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:54.838 [2024-12-14 12:41:54.440247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:54.838 [2024-12-14 12:41:54.440265] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.838 [2024-12-14 12:41:54.440287] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:54.838 [2024-12-14 12:41:54.440298] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:54.838 BaseBdev1 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.838 12:41:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.780 12:41:55 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.780 "name": "raid_bdev1", 00:15:55.780 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:55.780 "strip_size_kb": 64, 00:15:55.780 "state": "online", 00:15:55.780 "raid_level": "raid5f", 00:15:55.780 "superblock": true, 00:15:55.780 "num_base_bdevs": 3, 00:15:55.780 "num_base_bdevs_discovered": 2, 00:15:55.780 "num_base_bdevs_operational": 2, 00:15:55.780 "base_bdevs_list": [ 00:15:55.780 { 00:15:55.780 "name": null, 00:15:55.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.780 "is_configured": false, 00:15:55.780 "data_offset": 0, 00:15:55.780 "data_size": 63488 00:15:55.780 }, 00:15:55.780 { 00:15:55.780 "name": "BaseBdev2", 00:15:55.780 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:55.780 "is_configured": true, 00:15:55.780 "data_offset": 2048, 00:15:55.780 "data_size": 63488 00:15:55.780 }, 00:15:55.780 { 00:15:55.780 "name": "BaseBdev3", 00:15:55.780 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:55.780 "is_configured": true, 00:15:55.780 "data_offset": 2048, 00:15:55.780 "data_size": 63488 00:15:55.780 } 00:15:55.780 ] 00:15:55.780 }' 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.780 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.393 "name": "raid_bdev1", 00:15:56.393 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:56.393 "strip_size_kb": 64, 00:15:56.393 "state": "online", 00:15:56.393 "raid_level": "raid5f", 00:15:56.393 "superblock": true, 00:15:56.393 "num_base_bdevs": 3, 00:15:56.393 "num_base_bdevs_discovered": 2, 00:15:56.393 "num_base_bdevs_operational": 2, 00:15:56.393 "base_bdevs_list": [ 00:15:56.393 { 00:15:56.393 "name": null, 00:15:56.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.393 "is_configured": false, 00:15:56.393 "data_offset": 0, 00:15:56.393 "data_size": 63488 00:15:56.393 }, 00:15:56.393 { 00:15:56.393 "name": "BaseBdev2", 00:15:56.393 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 
00:15:56.393 "is_configured": true, 00:15:56.393 "data_offset": 2048, 00:15:56.393 "data_size": 63488 00:15:56.393 }, 00:15:56.393 { 00:15:56.393 "name": "BaseBdev3", 00:15:56.393 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:56.393 "is_configured": true, 00:15:56.393 "data_offset": 2048, 00:15:56.393 "data_size": 63488 00:15:56.393 } 00:15:56.393 ] 00:15:56.393 }' 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.393 12:41:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.393 12:41:56 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.393 [2024-12-14 12:41:56.025080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:56.393 [2024-12-14 12:41:56.025289] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:56.393 [2024-12-14 12:41:56.025353] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:56.393 request: 00:15:56.393 { 00:15:56.393 "base_bdev": "BaseBdev1", 00:15:56.393 "raid_bdev": "raid_bdev1", 00:15:56.393 "method": "bdev_raid_add_base_bdev", 00:15:56.393 "req_id": 1 00:15:56.393 } 00:15:56.393 Got JSON-RPC error response 00:15:56.393 response: 00:15:56.393 { 00:15:56.393 "code": -22, 00:15:56.393 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:56.393 } 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:56.393 12:41:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.332 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.592 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.592 "name": "raid_bdev1", 00:15:57.592 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:57.592 "strip_size_kb": 64, 00:15:57.592 "state": "online", 00:15:57.592 "raid_level": "raid5f", 00:15:57.592 "superblock": true, 00:15:57.592 "num_base_bdevs": 3, 00:15:57.592 "num_base_bdevs_discovered": 2, 00:15:57.592 "num_base_bdevs_operational": 2, 00:15:57.592 "base_bdevs_list": [ 00:15:57.592 { 00:15:57.592 "name": null, 00:15:57.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.592 "is_configured": false, 00:15:57.592 "data_offset": 0, 00:15:57.592 "data_size": 63488 00:15:57.592 }, 00:15:57.592 { 00:15:57.592 
"name": "BaseBdev2", 00:15:57.592 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:57.592 "is_configured": true, 00:15:57.592 "data_offset": 2048, 00:15:57.592 "data_size": 63488 00:15:57.592 }, 00:15:57.592 { 00:15:57.592 "name": "BaseBdev3", 00:15:57.592 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:57.592 "is_configured": true, 00:15:57.592 "data_offset": 2048, 00:15:57.592 "data_size": 63488 00:15:57.592 } 00:15:57.592 ] 00:15:57.592 }' 00:15:57.592 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.592 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.851 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.851 "name": "raid_bdev1", 00:15:57.851 "uuid": "b87bfb11-7c98-488e-be0c-599b60f73cf8", 00:15:57.851 
"strip_size_kb": 64, 00:15:57.851 "state": "online", 00:15:57.851 "raid_level": "raid5f", 00:15:57.851 "superblock": true, 00:15:57.851 "num_base_bdevs": 3, 00:15:57.851 "num_base_bdevs_discovered": 2, 00:15:57.851 "num_base_bdevs_operational": 2, 00:15:57.851 "base_bdevs_list": [ 00:15:57.851 { 00:15:57.851 "name": null, 00:15:57.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.851 "is_configured": false, 00:15:57.851 "data_offset": 0, 00:15:57.851 "data_size": 63488 00:15:57.851 }, 00:15:57.851 { 00:15:57.851 "name": "BaseBdev2", 00:15:57.852 "uuid": "5ce2ba94-5dad-5dbb-9d98-1308a6d4aa8b", 00:15:57.852 "is_configured": true, 00:15:57.852 "data_offset": 2048, 00:15:57.852 "data_size": 63488 00:15:57.852 }, 00:15:57.852 { 00:15:57.852 "name": "BaseBdev3", 00:15:57.852 "uuid": "f9d139bb-5913-5a7a-9356-3cd103ea42a3", 00:15:57.852 "is_configured": true, 00:15:57.852 "data_offset": 2048, 00:15:57.852 "data_size": 63488 00:15:57.852 } 00:15:57.852 ] 00:15:57.852 }' 00:15:57.852 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.852 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:57.852 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.111 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.111 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 83757 00:15:58.111 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83757 ']' 00:15:58.111 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 83757 00:15:58.111 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:58.111 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:58.112 12:41:57 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83757 00:15:58.112 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:58.112 killing process with pid 83757 00:15:58.112 Received shutdown signal, test time was about 60.000000 seconds 00:15:58.112 00:15:58.112 Latency(us) 00:15:58.112 [2024-12-14T12:41:57.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.112 [2024-12-14T12:41:57.850Z] =================================================================================================================== 00:15:58.112 [2024-12-14T12:41:57.850Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:58.112 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:58.112 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83757' 00:15:58.112 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 83757 00:15:58.112 [2024-12-14 12:41:57.651715] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.112 [2024-12-14 12:41:57.651854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.112 12:41:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 83757 00:15:58.112 [2024-12-14 12:41:57.651940] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.112 [2024-12-14 12:41:57.651954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:58.371 [2024-12-14 12:41:58.027838] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:59.752 12:41:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:59.752 00:15:59.752 real 0m23.151s 00:15:59.752 user 0m29.726s 
00:15:59.752 sys 0m2.687s 00:15:59.752 12:41:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:59.752 ************************************ 00:15:59.752 END TEST raid5f_rebuild_test_sb 00:15:59.752 ************************************ 00:15:59.752 12:41:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.753 12:41:59 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:59.753 12:41:59 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:59.753 12:41:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:59.753 12:41:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:59.753 12:41:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:59.753 ************************************ 00:15:59.753 START TEST raid5f_state_function_test 00:15:59.753 ************************************ 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=84505 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 84505' 00:15:59.753 Process raid pid: 84505 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 84505 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 84505 ']' 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:59.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:59.753 12:41:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.753 [2024-12-14 12:41:59.251390] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:59.753 [2024-12-14 12:41:59.251570] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:59.753 [2024-12-14 12:41:59.425002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.013 [2024-12-14 12:41:59.532905] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.013 [2024-12-14 12:41:59.728080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.013 [2024-12-14 12:41:59.728162] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.583 [2024-12-14 12:42:00.082755] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.583 [2024-12-14 12:42:00.082809] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.583 [2024-12-14 12:42:00.082820] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.583 [2024-12-14 12:42:00.082829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.583 [2024-12-14 12:42:00.082836] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:00.583 [2024-12-14 12:42:00.082844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.583 [2024-12-14 12:42:00.082850] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:00.583 [2024-12-14 12:42:00.082859] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.583 12:42:00 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.583 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:00.583 "name": "Existed_Raid", 00:16:00.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.583 "strip_size_kb": 64, 00:16:00.583 "state": "configuring", 00:16:00.583 "raid_level": "raid5f", 00:16:00.583 "superblock": false, 00:16:00.583 "num_base_bdevs": 4, 00:16:00.583 "num_base_bdevs_discovered": 0, 00:16:00.583 "num_base_bdevs_operational": 4, 00:16:00.583 "base_bdevs_list": [ 00:16:00.583 { 00:16:00.583 "name": "BaseBdev1", 00:16:00.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.583 "is_configured": false, 00:16:00.583 "data_offset": 0, 00:16:00.583 "data_size": 0 00:16:00.583 }, 00:16:00.583 { 00:16:00.583 "name": "BaseBdev2", 00:16:00.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.583 "is_configured": false, 00:16:00.583 "data_offset": 0, 00:16:00.583 "data_size": 0 00:16:00.583 }, 00:16:00.583 { 00:16:00.583 "name": "BaseBdev3", 00:16:00.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.583 "is_configured": false, 00:16:00.583 "data_offset": 0, 00:16:00.583 "data_size": 0 00:16:00.583 }, 00:16:00.583 { 00:16:00.583 "name": "BaseBdev4", 00:16:00.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.584 "is_configured": false, 00:16:00.584 "data_offset": 0, 00:16:00.584 "data_size": 0 00:16:00.584 } 00:16:00.584 ] 00:16:00.584 }' 00:16:00.584 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:00.584 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.844 [2024-12-14 12:42:00.509982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:00.844 [2024-12-14 12:42:00.510086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.844 [2024-12-14 12:42:00.517960] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.844 [2024-12-14 12:42:00.518046] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.844 [2024-12-14 12:42:00.518074] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.844 [2024-12-14 12:42:00.518097] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.844 [2024-12-14 12:42:00.518114] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.844 [2024-12-14 12:42:00.518134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.844 [2024-12-14 12:42:00.518151] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:00.844 [2024-12-14 12:42:00.518172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.844 [2024-12-14 12:42:00.560236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.844 BaseBdev1 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.844 
12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.844 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.104 [ 00:16:01.104 { 00:16:01.104 "name": "BaseBdev1", 00:16:01.104 "aliases": [ 00:16:01.104 "8c3196c0-f54d-486b-a706-f5828a698b53" 00:16:01.104 ], 00:16:01.104 "product_name": "Malloc disk", 00:16:01.104 "block_size": 512, 00:16:01.104 "num_blocks": 65536, 00:16:01.104 "uuid": "8c3196c0-f54d-486b-a706-f5828a698b53", 00:16:01.104 "assigned_rate_limits": { 00:16:01.104 "rw_ios_per_sec": 0, 00:16:01.104 "rw_mbytes_per_sec": 0, 00:16:01.104 "r_mbytes_per_sec": 0, 00:16:01.104 "w_mbytes_per_sec": 0 00:16:01.104 }, 00:16:01.104 "claimed": true, 00:16:01.104 "claim_type": "exclusive_write", 00:16:01.104 "zoned": false, 00:16:01.104 "supported_io_types": { 00:16:01.104 "read": true, 00:16:01.104 "write": true, 00:16:01.104 "unmap": true, 00:16:01.104 "flush": true, 00:16:01.104 "reset": true, 00:16:01.104 "nvme_admin": false, 00:16:01.104 "nvme_io": false, 00:16:01.104 "nvme_io_md": false, 00:16:01.104 "write_zeroes": true, 00:16:01.104 "zcopy": true, 00:16:01.104 "get_zone_info": false, 00:16:01.104 "zone_management": false, 00:16:01.104 "zone_append": false, 00:16:01.104 "compare": false, 00:16:01.104 "compare_and_write": false, 00:16:01.104 "abort": true, 00:16:01.104 "seek_hole": false, 00:16:01.104 "seek_data": false, 00:16:01.104 "copy": true, 00:16:01.104 "nvme_iov_md": false 00:16:01.104 }, 00:16:01.104 "memory_domains": [ 00:16:01.104 { 00:16:01.104 "dma_device_id": "system", 00:16:01.104 "dma_device_type": 1 00:16:01.104 }, 00:16:01.104 { 00:16:01.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.104 "dma_device_type": 2 00:16:01.104 } 00:16:01.104 ], 00:16:01.104 "driver_specific": {} 00:16:01.104 } 
00:16:01.104 ] 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.104 "name": "Existed_Raid", 00:16:01.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.104 "strip_size_kb": 64, 00:16:01.104 "state": "configuring", 00:16:01.104 "raid_level": "raid5f", 00:16:01.104 "superblock": false, 00:16:01.104 "num_base_bdevs": 4, 00:16:01.104 "num_base_bdevs_discovered": 1, 00:16:01.104 "num_base_bdevs_operational": 4, 00:16:01.104 "base_bdevs_list": [ 00:16:01.104 { 00:16:01.104 "name": "BaseBdev1", 00:16:01.104 "uuid": "8c3196c0-f54d-486b-a706-f5828a698b53", 00:16:01.104 "is_configured": true, 00:16:01.104 "data_offset": 0, 00:16:01.104 "data_size": 65536 00:16:01.104 }, 00:16:01.104 { 00:16:01.104 "name": "BaseBdev2", 00:16:01.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.104 "is_configured": false, 00:16:01.104 "data_offset": 0, 00:16:01.104 "data_size": 0 00:16:01.104 }, 00:16:01.104 { 00:16:01.104 "name": "BaseBdev3", 00:16:01.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.104 "is_configured": false, 00:16:01.104 "data_offset": 0, 00:16:01.104 "data_size": 0 00:16:01.104 }, 00:16:01.104 { 00:16:01.104 "name": "BaseBdev4", 00:16:01.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.104 "is_configured": false, 00:16:01.104 "data_offset": 0, 00:16:01.104 "data_size": 0 00:16:01.104 } 00:16:01.104 ] 00:16:01.104 }' 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.104 12:42:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.363 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:01.363 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.363 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.363 
[2024-12-14 12:42:01.035468] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.363 [2024-12-14 12:42:01.035522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:01.363 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.363 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:01.363 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.363 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.363 [2024-12-14 12:42:01.043500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:01.364 [2024-12-14 12:42:01.045287] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:01.364 [2024-12-14 12:42:01.045373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:01.364 [2024-12-14 12:42:01.045386] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:01.364 [2024-12-14 12:42:01.045399] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:01.364 [2024-12-14 12:42:01.045406] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:01.364 [2024-12-14 12:42:01.045415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.364 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.623 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.623 "name": "Existed_Raid", 00:16:01.623 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:01.623 "strip_size_kb": 64, 00:16:01.623 "state": "configuring", 00:16:01.623 "raid_level": "raid5f", 00:16:01.623 "superblock": false, 00:16:01.623 "num_base_bdevs": 4, 00:16:01.623 "num_base_bdevs_discovered": 1, 00:16:01.623 "num_base_bdevs_operational": 4, 00:16:01.623 "base_bdevs_list": [ 00:16:01.623 { 00:16:01.623 "name": "BaseBdev1", 00:16:01.623 "uuid": "8c3196c0-f54d-486b-a706-f5828a698b53", 00:16:01.623 "is_configured": true, 00:16:01.623 "data_offset": 0, 00:16:01.623 "data_size": 65536 00:16:01.623 }, 00:16:01.623 { 00:16:01.623 "name": "BaseBdev2", 00:16:01.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.623 "is_configured": false, 00:16:01.623 "data_offset": 0, 00:16:01.623 "data_size": 0 00:16:01.623 }, 00:16:01.623 { 00:16:01.623 "name": "BaseBdev3", 00:16:01.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.623 "is_configured": false, 00:16:01.623 "data_offset": 0, 00:16:01.623 "data_size": 0 00:16:01.623 }, 00:16:01.623 { 00:16:01.623 "name": "BaseBdev4", 00:16:01.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.623 "is_configured": false, 00:16:01.623 "data_offset": 0, 00:16:01.623 "data_size": 0 00:16:01.623 } 00:16:01.623 ] 00:16:01.623 }' 00:16:01.623 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.623 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.882 [2024-12-14 12:42:01.534121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:01.882 BaseBdev2 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.882 [ 00:16:01.882 { 00:16:01.882 "name": "BaseBdev2", 00:16:01.882 "aliases": [ 00:16:01.882 "23041855-f021-4f49-9e7c-81fbddc5d5ad" 00:16:01.882 ], 00:16:01.882 "product_name": "Malloc disk", 00:16:01.882 "block_size": 512, 00:16:01.882 "num_blocks": 65536, 00:16:01.882 "uuid": "23041855-f021-4f49-9e7c-81fbddc5d5ad", 00:16:01.882 "assigned_rate_limits": { 00:16:01.882 "rw_ios_per_sec": 0, 00:16:01.882 "rw_mbytes_per_sec": 0, 00:16:01.882 
"r_mbytes_per_sec": 0, 00:16:01.882 "w_mbytes_per_sec": 0 00:16:01.882 }, 00:16:01.882 "claimed": true, 00:16:01.882 "claim_type": "exclusive_write", 00:16:01.882 "zoned": false, 00:16:01.882 "supported_io_types": { 00:16:01.882 "read": true, 00:16:01.882 "write": true, 00:16:01.882 "unmap": true, 00:16:01.882 "flush": true, 00:16:01.882 "reset": true, 00:16:01.882 "nvme_admin": false, 00:16:01.882 "nvme_io": false, 00:16:01.882 "nvme_io_md": false, 00:16:01.882 "write_zeroes": true, 00:16:01.882 "zcopy": true, 00:16:01.882 "get_zone_info": false, 00:16:01.882 "zone_management": false, 00:16:01.882 "zone_append": false, 00:16:01.882 "compare": false, 00:16:01.882 "compare_and_write": false, 00:16:01.882 "abort": true, 00:16:01.882 "seek_hole": false, 00:16:01.882 "seek_data": false, 00:16:01.882 "copy": true, 00:16:01.882 "nvme_iov_md": false 00:16:01.882 }, 00:16:01.882 "memory_domains": [ 00:16:01.882 { 00:16:01.882 "dma_device_id": "system", 00:16:01.882 "dma_device_type": 1 00:16:01.882 }, 00:16:01.882 { 00:16:01.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.882 "dma_device_type": 2 00:16:01.882 } 00:16:01.882 ], 00:16:01.882 "driver_specific": {} 00:16:01.882 } 00:16:01.882 ] 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.882 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.141 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.141 "name": "Existed_Raid", 00:16:02.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.141 "strip_size_kb": 64, 00:16:02.141 "state": "configuring", 00:16:02.141 "raid_level": "raid5f", 00:16:02.141 "superblock": false, 00:16:02.141 "num_base_bdevs": 4, 00:16:02.141 "num_base_bdevs_discovered": 2, 00:16:02.141 "num_base_bdevs_operational": 4, 00:16:02.141 "base_bdevs_list": [ 00:16:02.141 { 00:16:02.141 "name": "BaseBdev1", 00:16:02.141 "uuid": 
"8c3196c0-f54d-486b-a706-f5828a698b53", 00:16:02.141 "is_configured": true, 00:16:02.141 "data_offset": 0, 00:16:02.141 "data_size": 65536 00:16:02.141 }, 00:16:02.141 { 00:16:02.141 "name": "BaseBdev2", 00:16:02.141 "uuid": "23041855-f021-4f49-9e7c-81fbddc5d5ad", 00:16:02.141 "is_configured": true, 00:16:02.141 "data_offset": 0, 00:16:02.141 "data_size": 65536 00:16:02.141 }, 00:16:02.141 { 00:16:02.141 "name": "BaseBdev3", 00:16:02.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.141 "is_configured": false, 00:16:02.141 "data_offset": 0, 00:16:02.141 "data_size": 0 00:16:02.141 }, 00:16:02.141 { 00:16:02.141 "name": "BaseBdev4", 00:16:02.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.141 "is_configured": false, 00:16:02.141 "data_offset": 0, 00:16:02.141 "data_size": 0 00:16:02.141 } 00:16:02.141 ] 00:16:02.141 }' 00:16:02.141 12:42:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.141 12:42:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.400 [2024-12-14 12:42:02.106246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:02.400 BaseBdev3 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.400 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.400 [ 00:16:02.400 { 00:16:02.400 "name": "BaseBdev3", 00:16:02.400 "aliases": [ 00:16:02.400 "fa3d2acc-c4a5-4cdf-bb1a-c2e106ff641c" 00:16:02.400 ], 00:16:02.400 "product_name": "Malloc disk", 00:16:02.400 "block_size": 512, 00:16:02.400 "num_blocks": 65536, 00:16:02.400 "uuid": "fa3d2acc-c4a5-4cdf-bb1a-c2e106ff641c", 00:16:02.400 "assigned_rate_limits": { 00:16:02.400 "rw_ios_per_sec": 0, 00:16:02.400 "rw_mbytes_per_sec": 0, 00:16:02.400 "r_mbytes_per_sec": 0, 00:16:02.400 "w_mbytes_per_sec": 0 00:16:02.400 }, 00:16:02.400 "claimed": true, 00:16:02.400 "claim_type": "exclusive_write", 00:16:02.400 "zoned": false, 00:16:02.400 "supported_io_types": { 00:16:02.400 "read": true, 00:16:02.658 "write": true, 00:16:02.658 "unmap": true, 00:16:02.658 "flush": true, 00:16:02.658 "reset": true, 00:16:02.658 "nvme_admin": false, 
00:16:02.658 "nvme_io": false, 00:16:02.658 "nvme_io_md": false, 00:16:02.658 "write_zeroes": true, 00:16:02.658 "zcopy": true, 00:16:02.658 "get_zone_info": false, 00:16:02.658 "zone_management": false, 00:16:02.658 "zone_append": false, 00:16:02.658 "compare": false, 00:16:02.658 "compare_and_write": false, 00:16:02.658 "abort": true, 00:16:02.658 "seek_hole": false, 00:16:02.658 "seek_data": false, 00:16:02.658 "copy": true, 00:16:02.658 "nvme_iov_md": false 00:16:02.658 }, 00:16:02.658 "memory_domains": [ 00:16:02.658 { 00:16:02.658 "dma_device_id": "system", 00:16:02.658 "dma_device_type": 1 00:16:02.658 }, 00:16:02.658 { 00:16:02.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.658 "dma_device_type": 2 00:16:02.658 } 00:16:02.658 ], 00:16:02.658 "driver_specific": {} 00:16:02.658 } 00:16:02.658 ] 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.658 "name": "Existed_Raid", 00:16:02.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.658 "strip_size_kb": 64, 00:16:02.658 "state": "configuring", 00:16:02.658 "raid_level": "raid5f", 00:16:02.658 "superblock": false, 00:16:02.658 "num_base_bdevs": 4, 00:16:02.658 "num_base_bdevs_discovered": 3, 00:16:02.658 "num_base_bdevs_operational": 4, 00:16:02.658 "base_bdevs_list": [ 00:16:02.658 { 00:16:02.658 "name": "BaseBdev1", 00:16:02.658 "uuid": "8c3196c0-f54d-486b-a706-f5828a698b53", 00:16:02.658 "is_configured": true, 00:16:02.658 "data_offset": 0, 00:16:02.658 "data_size": 65536 00:16:02.658 }, 00:16:02.658 { 00:16:02.658 "name": "BaseBdev2", 00:16:02.658 "uuid": "23041855-f021-4f49-9e7c-81fbddc5d5ad", 00:16:02.658 "is_configured": true, 00:16:02.658 "data_offset": 0, 00:16:02.658 "data_size": 65536 00:16:02.658 }, 00:16:02.658 { 
00:16:02.658 "name": "BaseBdev3", 00:16:02.658 "uuid": "fa3d2acc-c4a5-4cdf-bb1a-c2e106ff641c", 00:16:02.658 "is_configured": true, 00:16:02.658 "data_offset": 0, 00:16:02.658 "data_size": 65536 00:16:02.658 }, 00:16:02.658 { 00:16:02.658 "name": "BaseBdev4", 00:16:02.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.658 "is_configured": false, 00:16:02.658 "data_offset": 0, 00:16:02.658 "data_size": 0 00:16:02.658 } 00:16:02.658 ] 00:16:02.658 }' 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.658 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.918 [2024-12-14 12:42:02.612620] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:02.918 [2024-12-14 12:42:02.612682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:02.918 [2024-12-14 12:42:02.612692] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:02.918 [2024-12-14 12:42:02.612935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:02.918 [2024-12-14 12:42:02.620228] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:02.918 [2024-12-14 12:42:02.620264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:02.918 [2024-12-14 12:42:02.620540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:02.918 BaseBdev4 00:16:02.918 12:42:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.918 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.918 [ 00:16:02.918 { 00:16:02.918 "name": "BaseBdev4", 00:16:02.918 "aliases": [ 00:16:02.918 "31ced395-8860-47e5-a9f2-914dfa62fc53" 00:16:02.918 ], 00:16:02.918 "product_name": "Malloc disk", 00:16:02.918 "block_size": 512, 00:16:02.918 "num_blocks": 65536, 00:16:02.918 "uuid": "31ced395-8860-47e5-a9f2-914dfa62fc53", 00:16:02.918 "assigned_rate_limits": { 00:16:02.918 "rw_ios_per_sec": 0, 00:16:02.918 
"rw_mbytes_per_sec": 0, 00:16:02.918 "r_mbytes_per_sec": 0, 00:16:02.918 "w_mbytes_per_sec": 0 00:16:02.918 }, 00:16:02.918 "claimed": true, 00:16:02.918 "claim_type": "exclusive_write", 00:16:02.918 "zoned": false, 00:16:02.918 "supported_io_types": { 00:16:02.918 "read": true, 00:16:02.918 "write": true, 00:16:02.918 "unmap": true, 00:16:02.918 "flush": true, 00:16:02.918 "reset": true, 00:16:02.918 "nvme_admin": false, 00:16:02.918 "nvme_io": false, 00:16:02.918 "nvme_io_md": false, 00:16:02.918 "write_zeroes": true, 00:16:02.918 "zcopy": true, 00:16:02.918 "get_zone_info": false, 00:16:02.918 "zone_management": false, 00:16:02.918 "zone_append": false, 00:16:02.918 "compare": false, 00:16:02.918 "compare_and_write": false, 00:16:02.918 "abort": true, 00:16:02.918 "seek_hole": false, 00:16:02.918 "seek_data": false, 00:16:02.918 "copy": true, 00:16:02.918 "nvme_iov_md": false 00:16:02.918 }, 00:16:02.918 "memory_domains": [ 00:16:02.918 { 00:16:03.178 "dma_device_id": "system", 00:16:03.178 "dma_device_type": 1 00:16:03.178 }, 00:16:03.178 { 00:16:03.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.178 "dma_device_type": 2 00:16:03.178 } 00:16:03.178 ], 00:16:03.178 "driver_specific": {} 00:16:03.178 } 00:16:03.178 ] 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.178 12:42:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.178 "name": "Existed_Raid", 00:16:03.178 "uuid": "8814541d-310d-4bec-9873-3f121ad90462", 00:16:03.178 "strip_size_kb": 64, 00:16:03.178 "state": "online", 00:16:03.178 "raid_level": "raid5f", 00:16:03.178 "superblock": false, 00:16:03.178 "num_base_bdevs": 4, 00:16:03.178 "num_base_bdevs_discovered": 4, 00:16:03.178 "num_base_bdevs_operational": 4, 00:16:03.178 "base_bdevs_list": [ 00:16:03.178 { 00:16:03.178 "name": 
"BaseBdev1", 00:16:03.178 "uuid": "8c3196c0-f54d-486b-a706-f5828a698b53", 00:16:03.178 "is_configured": true, 00:16:03.178 "data_offset": 0, 00:16:03.178 "data_size": 65536 00:16:03.178 }, 00:16:03.178 { 00:16:03.178 "name": "BaseBdev2", 00:16:03.178 "uuid": "23041855-f021-4f49-9e7c-81fbddc5d5ad", 00:16:03.178 "is_configured": true, 00:16:03.178 "data_offset": 0, 00:16:03.178 "data_size": 65536 00:16:03.178 }, 00:16:03.178 { 00:16:03.178 "name": "BaseBdev3", 00:16:03.178 "uuid": "fa3d2acc-c4a5-4cdf-bb1a-c2e106ff641c", 00:16:03.178 "is_configured": true, 00:16:03.178 "data_offset": 0, 00:16:03.178 "data_size": 65536 00:16:03.178 }, 00:16:03.178 { 00:16:03.178 "name": "BaseBdev4", 00:16:03.178 "uuid": "31ced395-8860-47e5-a9f2-914dfa62fc53", 00:16:03.178 "is_configured": true, 00:16:03.178 "data_offset": 0, 00:16:03.178 "data_size": 65536 00:16:03.178 } 00:16:03.178 ] 00:16:03.178 }' 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.178 12:42:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:03.438 [2024-12-14 12:42:03.088560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.438 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:03.438 "name": "Existed_Raid", 00:16:03.438 "aliases": [ 00:16:03.438 "8814541d-310d-4bec-9873-3f121ad90462" 00:16:03.438 ], 00:16:03.438 "product_name": "Raid Volume", 00:16:03.438 "block_size": 512, 00:16:03.438 "num_blocks": 196608, 00:16:03.438 "uuid": "8814541d-310d-4bec-9873-3f121ad90462", 00:16:03.438 "assigned_rate_limits": { 00:16:03.438 "rw_ios_per_sec": 0, 00:16:03.438 "rw_mbytes_per_sec": 0, 00:16:03.438 "r_mbytes_per_sec": 0, 00:16:03.438 "w_mbytes_per_sec": 0 00:16:03.438 }, 00:16:03.439 "claimed": false, 00:16:03.439 "zoned": false, 00:16:03.439 "supported_io_types": { 00:16:03.439 "read": true, 00:16:03.439 "write": true, 00:16:03.439 "unmap": false, 00:16:03.439 "flush": false, 00:16:03.439 "reset": true, 00:16:03.439 "nvme_admin": false, 00:16:03.439 "nvme_io": false, 00:16:03.439 "nvme_io_md": false, 00:16:03.439 "write_zeroes": true, 00:16:03.439 "zcopy": false, 00:16:03.439 "get_zone_info": false, 00:16:03.439 "zone_management": false, 00:16:03.439 "zone_append": false, 00:16:03.439 "compare": false, 00:16:03.439 "compare_and_write": false, 00:16:03.439 "abort": false, 00:16:03.439 "seek_hole": false, 00:16:03.439 "seek_data": false, 00:16:03.439 "copy": false, 00:16:03.439 "nvme_iov_md": false 00:16:03.439 }, 00:16:03.439 "driver_specific": { 00:16:03.439 "raid": { 00:16:03.439 "uuid": "8814541d-310d-4bec-9873-3f121ad90462", 00:16:03.439 "strip_size_kb": 64, 
00:16:03.439 "state": "online", 00:16:03.439 "raid_level": "raid5f", 00:16:03.439 "superblock": false, 00:16:03.439 "num_base_bdevs": 4, 00:16:03.439 "num_base_bdevs_discovered": 4, 00:16:03.439 "num_base_bdevs_operational": 4, 00:16:03.439 "base_bdevs_list": [ 00:16:03.439 { 00:16:03.439 "name": "BaseBdev1", 00:16:03.439 "uuid": "8c3196c0-f54d-486b-a706-f5828a698b53", 00:16:03.439 "is_configured": true, 00:16:03.439 "data_offset": 0, 00:16:03.439 "data_size": 65536 00:16:03.439 }, 00:16:03.439 { 00:16:03.439 "name": "BaseBdev2", 00:16:03.439 "uuid": "23041855-f021-4f49-9e7c-81fbddc5d5ad", 00:16:03.439 "is_configured": true, 00:16:03.439 "data_offset": 0, 00:16:03.439 "data_size": 65536 00:16:03.439 }, 00:16:03.439 { 00:16:03.439 "name": "BaseBdev3", 00:16:03.439 "uuid": "fa3d2acc-c4a5-4cdf-bb1a-c2e106ff641c", 00:16:03.439 "is_configured": true, 00:16:03.439 "data_offset": 0, 00:16:03.439 "data_size": 65536 00:16:03.439 }, 00:16:03.439 { 00:16:03.439 "name": "BaseBdev4", 00:16:03.439 "uuid": "31ced395-8860-47e5-a9f2-914dfa62fc53", 00:16:03.439 "is_configured": true, 00:16:03.439 "data_offset": 0, 00:16:03.439 "data_size": 65536 00:16:03.439 } 00:16:03.439 ] 00:16:03.439 } 00:16:03.439 } 00:16:03.439 }' 00:16:03.439 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:03.699 BaseBdev2 00:16:03.699 BaseBdev3 00:16:03.699 BaseBdev4' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.699 12:42:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.699 12:42:03 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.699 [2024-12-14 12:42:03.387867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.959 12:42:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.959 "name": "Existed_Raid", 00:16:03.959 "uuid": "8814541d-310d-4bec-9873-3f121ad90462", 00:16:03.959 "strip_size_kb": 64, 00:16:03.959 "state": "online", 00:16:03.959 "raid_level": "raid5f", 00:16:03.959 "superblock": false, 00:16:03.959 "num_base_bdevs": 4, 00:16:03.959 "num_base_bdevs_discovered": 3, 00:16:03.959 "num_base_bdevs_operational": 3, 00:16:03.959 "base_bdevs_list": [ 00:16:03.959 { 00:16:03.959 "name": null, 00:16:03.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.959 "is_configured": false, 00:16:03.959 "data_offset": 0, 00:16:03.959 "data_size": 65536 00:16:03.959 }, 00:16:03.959 { 00:16:03.959 "name": "BaseBdev2", 00:16:03.959 "uuid": "23041855-f021-4f49-9e7c-81fbddc5d5ad", 00:16:03.959 "is_configured": true, 00:16:03.959 "data_offset": 0, 00:16:03.959 "data_size": 65536 00:16:03.959 }, 00:16:03.959 { 00:16:03.959 "name": "BaseBdev3", 00:16:03.959 "uuid": "fa3d2acc-c4a5-4cdf-bb1a-c2e106ff641c", 00:16:03.959 "is_configured": true, 00:16:03.959 "data_offset": 0, 00:16:03.959 "data_size": 65536 00:16:03.959 }, 00:16:03.959 { 00:16:03.959 "name": "BaseBdev4", 00:16:03.959 "uuid": "31ced395-8860-47e5-a9f2-914dfa62fc53", 00:16:03.959 "is_configured": true, 00:16:03.959 "data_offset": 0, 00:16:03.959 "data_size": 65536 00:16:03.959 } 00:16:03.959 ] 00:16:03.959 }' 00:16:03.959 
12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.959 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.219 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:04.219 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.219 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.219 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.219 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.219 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:04.219 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.479 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:04.480 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.480 12:42:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:04.480 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.480 12:42:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.480 [2024-12-14 12:42:03.975097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:04.480 [2024-12-14 12:42:03.975198] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:04.480 [2024-12-14 12:42:04.065361] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.480 [2024-12-14 12:42:04.125261] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:04.480 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.740 [2024-12-14 12:42:04.273810] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:04.740 [2024-12-14 12:42:04.273864] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.740 BaseBdev2 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.740 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.000 [ 00:16:05.000 { 00:16:05.000 "name": "BaseBdev2", 00:16:05.000 "aliases": [ 00:16:05.000 "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba" 00:16:05.000 ], 00:16:05.000 "product_name": "Malloc disk", 00:16:05.000 "block_size": 512, 00:16:05.000 "num_blocks": 65536, 00:16:05.000 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:05.000 "assigned_rate_limits": { 00:16:05.000 "rw_ios_per_sec": 0, 00:16:05.000 "rw_mbytes_per_sec": 0, 00:16:05.000 "r_mbytes_per_sec": 0, 00:16:05.000 "w_mbytes_per_sec": 0 00:16:05.000 }, 00:16:05.000 "claimed": false, 00:16:05.000 "zoned": false, 00:16:05.000 "supported_io_types": { 00:16:05.000 "read": true, 00:16:05.000 "write": true, 00:16:05.000 "unmap": true, 00:16:05.000 "flush": true, 00:16:05.000 "reset": true, 00:16:05.000 "nvme_admin": false, 00:16:05.000 "nvme_io": false, 00:16:05.000 "nvme_io_md": false, 00:16:05.000 "write_zeroes": true, 00:16:05.000 "zcopy": true, 00:16:05.000 "get_zone_info": false, 00:16:05.000 "zone_management": false, 00:16:05.000 "zone_append": false, 00:16:05.000 "compare": false, 00:16:05.000 "compare_and_write": false, 00:16:05.000 "abort": true, 00:16:05.001 "seek_hole": false, 00:16:05.001 "seek_data": false, 00:16:05.001 "copy": true, 00:16:05.001 "nvme_iov_md": false 00:16:05.001 }, 00:16:05.001 "memory_domains": [ 00:16:05.001 { 00:16:05.001 "dma_device_id": "system", 00:16:05.001 
"dma_device_type": 1 00:16:05.001 }, 00:16:05.001 { 00:16:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.001 "dma_device_type": 2 00:16:05.001 } 00:16:05.001 ], 00:16:05.001 "driver_specific": {} 00:16:05.001 } 00:16:05.001 ] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.001 BaseBdev3 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:05.001 12:42:04 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.001 [ 00:16:05.001 { 00:16:05.001 "name": "BaseBdev3", 00:16:05.001 "aliases": [ 00:16:05.001 "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4" 00:16:05.001 ], 00:16:05.001 "product_name": "Malloc disk", 00:16:05.001 "block_size": 512, 00:16:05.001 "num_blocks": 65536, 00:16:05.001 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:05.001 "assigned_rate_limits": { 00:16:05.001 "rw_ios_per_sec": 0, 00:16:05.001 "rw_mbytes_per_sec": 0, 00:16:05.001 "r_mbytes_per_sec": 0, 00:16:05.001 "w_mbytes_per_sec": 0 00:16:05.001 }, 00:16:05.001 "claimed": false, 00:16:05.001 "zoned": false, 00:16:05.001 "supported_io_types": { 00:16:05.001 "read": true, 00:16:05.001 "write": true, 00:16:05.001 "unmap": true, 00:16:05.001 "flush": true, 00:16:05.001 "reset": true, 00:16:05.001 "nvme_admin": false, 00:16:05.001 "nvme_io": false, 00:16:05.001 "nvme_io_md": false, 00:16:05.001 "write_zeroes": true, 00:16:05.001 "zcopy": true, 00:16:05.001 "get_zone_info": false, 00:16:05.001 "zone_management": false, 00:16:05.001 "zone_append": false, 00:16:05.001 "compare": false, 00:16:05.001 "compare_and_write": false, 00:16:05.001 "abort": true, 00:16:05.001 "seek_hole": false, 00:16:05.001 "seek_data": false, 00:16:05.001 "copy": true, 00:16:05.001 "nvme_iov_md": false 00:16:05.001 }, 00:16:05.001 "memory_domains": [ 00:16:05.001 { 00:16:05.001 
"dma_device_id": "system", 00:16:05.001 "dma_device_type": 1 00:16:05.001 }, 00:16:05.001 { 00:16:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.001 "dma_device_type": 2 00:16:05.001 } 00:16:05.001 ], 00:16:05.001 "driver_specific": {} 00:16:05.001 } 00:16:05.001 ] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.001 BaseBdev4 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.001 [ 00:16:05.001 { 00:16:05.001 "name": "BaseBdev4", 00:16:05.001 "aliases": [ 00:16:05.001 "3d0412e0-a73c-47ef-9080-13d2a7bf38ad" 00:16:05.001 ], 00:16:05.001 "product_name": "Malloc disk", 00:16:05.001 "block_size": 512, 00:16:05.001 "num_blocks": 65536, 00:16:05.001 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:05.001 "assigned_rate_limits": { 00:16:05.001 "rw_ios_per_sec": 0, 00:16:05.001 "rw_mbytes_per_sec": 0, 00:16:05.001 "r_mbytes_per_sec": 0, 00:16:05.001 "w_mbytes_per_sec": 0 00:16:05.001 }, 00:16:05.001 "claimed": false, 00:16:05.001 "zoned": false, 00:16:05.001 "supported_io_types": { 00:16:05.001 "read": true, 00:16:05.001 "write": true, 00:16:05.001 "unmap": true, 00:16:05.001 "flush": true, 00:16:05.001 "reset": true, 00:16:05.001 "nvme_admin": false, 00:16:05.001 "nvme_io": false, 00:16:05.001 "nvme_io_md": false, 00:16:05.001 "write_zeroes": true, 00:16:05.001 "zcopy": true, 00:16:05.001 "get_zone_info": false, 00:16:05.001 "zone_management": false, 00:16:05.001 "zone_append": false, 00:16:05.001 "compare": false, 00:16:05.001 "compare_and_write": false, 00:16:05.001 "abort": true, 00:16:05.001 "seek_hole": false, 00:16:05.001 "seek_data": false, 00:16:05.001 "copy": true, 00:16:05.001 "nvme_iov_md": false 00:16:05.001 }, 00:16:05.001 "memory_domains": [ 
00:16:05.001 { 00:16:05.001 "dma_device_id": "system", 00:16:05.001 "dma_device_type": 1 00:16:05.001 }, 00:16:05.001 { 00:16:05.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.001 "dma_device_type": 2 00:16:05.001 } 00:16:05.001 ], 00:16:05.001 "driver_specific": {} 00:16:05.001 } 00:16:05.001 ] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.001 [2024-12-14 12:42:04.646707] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:05.001 [2024-12-14 12:42:04.646749] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:05.001 [2024-12-14 12:42:04.646768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:05.001 [2024-12-14 12:42:04.648545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:05.001 [2024-12-14 12:42:04.648612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.001 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.002 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.002 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.002 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.002 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.002 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.002 "name": "Existed_Raid", 00:16:05.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.002 "strip_size_kb": 64, 00:16:05.002 "state": "configuring", 00:16:05.002 "raid_level": "raid5f", 00:16:05.002 
"superblock": false, 00:16:05.002 "num_base_bdevs": 4, 00:16:05.002 "num_base_bdevs_discovered": 3, 00:16:05.002 "num_base_bdevs_operational": 4, 00:16:05.002 "base_bdevs_list": [ 00:16:05.002 { 00:16:05.002 "name": "BaseBdev1", 00:16:05.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.002 "is_configured": false, 00:16:05.002 "data_offset": 0, 00:16:05.002 "data_size": 0 00:16:05.002 }, 00:16:05.002 { 00:16:05.002 "name": "BaseBdev2", 00:16:05.002 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:05.002 "is_configured": true, 00:16:05.002 "data_offset": 0, 00:16:05.002 "data_size": 65536 00:16:05.002 }, 00:16:05.002 { 00:16:05.002 "name": "BaseBdev3", 00:16:05.002 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:05.002 "is_configured": true, 00:16:05.002 "data_offset": 0, 00:16:05.002 "data_size": 65536 00:16:05.002 }, 00:16:05.002 { 00:16:05.002 "name": "BaseBdev4", 00:16:05.002 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:05.002 "is_configured": true, 00:16:05.002 "data_offset": 0, 00:16:05.002 "data_size": 65536 00:16:05.002 } 00:16:05.002 ] 00:16:05.002 }' 00:16:05.002 12:42:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.002 12:42:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.569 [2024-12-14 12:42:05.105956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.569 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.569 "name": "Existed_Raid", 00:16:05.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.569 "strip_size_kb": 64, 00:16:05.569 "state": "configuring", 00:16:05.569 "raid_level": "raid5f", 00:16:05.569 "superblock": false, 
00:16:05.569 "num_base_bdevs": 4, 00:16:05.569 "num_base_bdevs_discovered": 2, 00:16:05.569 "num_base_bdevs_operational": 4, 00:16:05.569 "base_bdevs_list": [ 00:16:05.569 { 00:16:05.569 "name": "BaseBdev1", 00:16:05.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.569 "is_configured": false, 00:16:05.569 "data_offset": 0, 00:16:05.569 "data_size": 0 00:16:05.569 }, 00:16:05.569 { 00:16:05.569 "name": null, 00:16:05.570 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:05.570 "is_configured": false, 00:16:05.570 "data_offset": 0, 00:16:05.570 "data_size": 65536 00:16:05.570 }, 00:16:05.570 { 00:16:05.570 "name": "BaseBdev3", 00:16:05.570 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:05.570 "is_configured": true, 00:16:05.570 "data_offset": 0, 00:16:05.570 "data_size": 65536 00:16:05.570 }, 00:16:05.570 { 00:16:05.570 "name": "BaseBdev4", 00:16:05.570 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:05.570 "is_configured": true, 00:16:05.570 "data_offset": 0, 00:16:05.570 "data_size": 65536 00:16:05.570 } 00:16:05.570 ] 00:16:05.570 }' 00:16:05.570 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.570 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.828 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.828 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:05.828 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.828 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.828 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:06.087 
12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.087 [2024-12-14 12:42:05.624477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:06.087 BaseBdev1 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:06.087 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.087 
12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.087 [ 00:16:06.087 { 00:16:06.087 "name": "BaseBdev1", 00:16:06.087 "aliases": [ 00:16:06.087 "6427c0f4-7598-4df2-b776-5f04e8cc9861" 00:16:06.087 ], 00:16:06.087 "product_name": "Malloc disk", 00:16:06.087 "block_size": 512, 00:16:06.087 "num_blocks": 65536, 00:16:06.087 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:06.087 "assigned_rate_limits": { 00:16:06.087 "rw_ios_per_sec": 0, 00:16:06.087 "rw_mbytes_per_sec": 0, 00:16:06.087 "r_mbytes_per_sec": 0, 00:16:06.087 "w_mbytes_per_sec": 0 00:16:06.087 }, 00:16:06.087 "claimed": true, 00:16:06.087 "claim_type": "exclusive_write", 00:16:06.087 "zoned": false, 00:16:06.087 "supported_io_types": { 00:16:06.087 "read": true, 00:16:06.087 "write": true, 00:16:06.087 "unmap": true, 00:16:06.087 "flush": true, 00:16:06.087 "reset": true, 00:16:06.087 "nvme_admin": false, 00:16:06.087 "nvme_io": false, 00:16:06.087 "nvme_io_md": false, 00:16:06.087 "write_zeroes": true, 00:16:06.087 "zcopy": true, 00:16:06.087 "get_zone_info": false, 00:16:06.087 "zone_management": false, 00:16:06.087 "zone_append": false, 00:16:06.087 "compare": false, 00:16:06.087 "compare_and_write": false, 00:16:06.087 "abort": true, 00:16:06.087 "seek_hole": false, 00:16:06.087 "seek_data": false, 00:16:06.088 "copy": true, 00:16:06.088 "nvme_iov_md": false 00:16:06.088 }, 00:16:06.088 "memory_domains": [ 00:16:06.088 { 00:16:06.088 "dma_device_id": "system", 00:16:06.088 "dma_device_type": 1 00:16:06.088 }, 00:16:06.088 { 00:16:06.088 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.088 "dma_device_type": 2 00:16:06.088 } 00:16:06.088 ], 00:16:06.088 "driver_specific": {} 00:16:06.088 } 00:16:06.088 ] 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.088 12:42:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.088 "name": "Existed_Raid", 00:16:06.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.088 "strip_size_kb": 64, 00:16:06.088 "state": 
"configuring", 00:16:06.088 "raid_level": "raid5f", 00:16:06.088 "superblock": false, 00:16:06.088 "num_base_bdevs": 4, 00:16:06.088 "num_base_bdevs_discovered": 3, 00:16:06.088 "num_base_bdevs_operational": 4, 00:16:06.088 "base_bdevs_list": [ 00:16:06.088 { 00:16:06.088 "name": "BaseBdev1", 00:16:06.088 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:06.088 "is_configured": true, 00:16:06.088 "data_offset": 0, 00:16:06.088 "data_size": 65536 00:16:06.088 }, 00:16:06.088 { 00:16:06.088 "name": null, 00:16:06.088 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:06.088 "is_configured": false, 00:16:06.088 "data_offset": 0, 00:16:06.088 "data_size": 65536 00:16:06.088 }, 00:16:06.088 { 00:16:06.088 "name": "BaseBdev3", 00:16:06.088 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:06.088 "is_configured": true, 00:16:06.088 "data_offset": 0, 00:16:06.088 "data_size": 65536 00:16:06.088 }, 00:16:06.088 { 00:16:06.088 "name": "BaseBdev4", 00:16:06.088 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:06.088 "is_configured": true, 00:16:06.088 "data_offset": 0, 00:16:06.088 "data_size": 65536 00:16:06.088 } 00:16:06.088 ] 00:16:06.088 }' 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.088 12:42:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.657 12:42:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.657 [2024-12-14 12:42:06.155656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.657 12:42:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.657 "name": "Existed_Raid", 00:16:06.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.657 "strip_size_kb": 64, 00:16:06.657 "state": "configuring", 00:16:06.657 "raid_level": "raid5f", 00:16:06.657 "superblock": false, 00:16:06.657 "num_base_bdevs": 4, 00:16:06.657 "num_base_bdevs_discovered": 2, 00:16:06.657 "num_base_bdevs_operational": 4, 00:16:06.657 "base_bdevs_list": [ 00:16:06.657 { 00:16:06.657 "name": "BaseBdev1", 00:16:06.657 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:06.657 "is_configured": true, 00:16:06.657 "data_offset": 0, 00:16:06.657 "data_size": 65536 00:16:06.657 }, 00:16:06.657 { 00:16:06.657 "name": null, 00:16:06.657 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:06.657 "is_configured": false, 00:16:06.657 "data_offset": 0, 00:16:06.657 "data_size": 65536 00:16:06.657 }, 00:16:06.657 { 00:16:06.657 "name": null, 00:16:06.657 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:06.657 "is_configured": false, 00:16:06.657 "data_offset": 0, 00:16:06.657 "data_size": 65536 00:16:06.657 }, 00:16:06.657 { 00:16:06.657 "name": "BaseBdev4", 00:16:06.657 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:06.657 "is_configured": true, 00:16:06.657 "data_offset": 0, 00:16:06.657 "data_size": 65536 00:16:06.657 } 00:16:06.657 ] 00:16:06.657 }' 00:16:06.657 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.657 12:42:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.917 [2024-12-14 12:42:06.622852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.917 
12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.917 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.178 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.178 "name": "Existed_Raid", 00:16:07.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.178 "strip_size_kb": 64, 00:16:07.178 "state": "configuring", 00:16:07.178 "raid_level": "raid5f", 00:16:07.178 "superblock": false, 00:16:07.178 "num_base_bdevs": 4, 00:16:07.178 "num_base_bdevs_discovered": 3, 00:16:07.178 "num_base_bdevs_operational": 4, 00:16:07.178 "base_bdevs_list": [ 00:16:07.178 { 00:16:07.178 "name": "BaseBdev1", 00:16:07.178 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:07.178 "is_configured": true, 00:16:07.178 "data_offset": 0, 00:16:07.178 "data_size": 65536 00:16:07.178 }, 00:16:07.178 { 00:16:07.178 "name": null, 00:16:07.178 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:07.178 "is_configured": 
false, 00:16:07.178 "data_offset": 0, 00:16:07.178 "data_size": 65536 00:16:07.178 }, 00:16:07.178 { 00:16:07.178 "name": "BaseBdev3", 00:16:07.178 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:07.178 "is_configured": true, 00:16:07.178 "data_offset": 0, 00:16:07.178 "data_size": 65536 00:16:07.178 }, 00:16:07.178 { 00:16:07.178 "name": "BaseBdev4", 00:16:07.178 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:07.178 "is_configured": true, 00:16:07.178 "data_offset": 0, 00:16:07.178 "data_size": 65536 00:16:07.178 } 00:16:07.178 ] 00:16:07.178 }' 00:16:07.178 12:42:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.178 12:42:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.438 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.438 [2024-12-14 12:42:07.170001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.696 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.696 "name": "Existed_Raid", 00:16:07.696 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:07.696 "strip_size_kb": 64, 00:16:07.696 "state": "configuring", 00:16:07.696 "raid_level": "raid5f", 00:16:07.696 "superblock": false, 00:16:07.696 "num_base_bdevs": 4, 00:16:07.696 "num_base_bdevs_discovered": 2, 00:16:07.696 "num_base_bdevs_operational": 4, 00:16:07.696 "base_bdevs_list": [ 00:16:07.696 { 00:16:07.696 "name": null, 00:16:07.696 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:07.696 "is_configured": false, 00:16:07.696 "data_offset": 0, 00:16:07.696 "data_size": 65536 00:16:07.696 }, 00:16:07.696 { 00:16:07.696 "name": null, 00:16:07.696 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:07.696 "is_configured": false, 00:16:07.696 "data_offset": 0, 00:16:07.696 "data_size": 65536 00:16:07.696 }, 00:16:07.696 { 00:16:07.696 "name": "BaseBdev3", 00:16:07.696 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:07.696 "is_configured": true, 00:16:07.696 "data_offset": 0, 00:16:07.696 "data_size": 65536 00:16:07.696 }, 00:16:07.696 { 00:16:07.697 "name": "BaseBdev4", 00:16:07.697 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:07.697 "is_configured": true, 00:16:07.697 "data_offset": 0, 00:16:07.697 "data_size": 65536 00:16:07.697 } 00:16:07.697 ] 00:16:07.697 }' 00:16:07.697 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.697 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.955 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.955 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.955 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.955 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.215 [2024-12-14 12:42:07.733683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.215 "name": "Existed_Raid", 00:16:08.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.215 "strip_size_kb": 64, 00:16:08.215 "state": "configuring", 00:16:08.215 "raid_level": "raid5f", 00:16:08.215 "superblock": false, 00:16:08.215 "num_base_bdevs": 4, 00:16:08.215 "num_base_bdevs_discovered": 3, 00:16:08.215 "num_base_bdevs_operational": 4, 00:16:08.215 "base_bdevs_list": [ 00:16:08.215 { 00:16:08.215 "name": null, 00:16:08.215 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:08.215 "is_configured": false, 00:16:08.215 "data_offset": 0, 00:16:08.215 "data_size": 65536 00:16:08.215 }, 00:16:08.215 { 00:16:08.215 "name": "BaseBdev2", 00:16:08.215 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:08.215 "is_configured": true, 00:16:08.215 "data_offset": 0, 00:16:08.215 "data_size": 65536 00:16:08.215 }, 00:16:08.215 { 00:16:08.215 "name": "BaseBdev3", 00:16:08.215 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:08.215 "is_configured": true, 00:16:08.215 "data_offset": 0, 00:16:08.215 "data_size": 65536 00:16:08.215 }, 00:16:08.215 { 00:16:08.215 "name": "BaseBdev4", 00:16:08.215 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:08.215 "is_configured": true, 00:16:08.215 "data_offset": 0, 00:16:08.215 "data_size": 65536 00:16:08.215 } 00:16:08.215 ] 00:16:08.215 }' 00:16:08.215 12:42:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.215 12:42:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.475 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.475 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.475 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:08.475 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.475 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.475 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6427c0f4-7598-4df2-b776-5f04e8cc9861 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.735 [2024-12-14 12:42:08.301411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:08.735 [2024-12-14 
12:42:08.301462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:08.735 [2024-12-14 12:42:08.301470] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:08.735 [2024-12-14 12:42:08.301704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:08.735 [2024-12-14 12:42:08.308804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:08.735 [2024-12-14 12:42:08.308831] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:08.735 [2024-12-14 12:42:08.309120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.735 NewBaseBdev 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.735 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.736 [ 00:16:08.736 { 00:16:08.736 "name": "NewBaseBdev", 00:16:08.736 "aliases": [ 00:16:08.736 "6427c0f4-7598-4df2-b776-5f04e8cc9861" 00:16:08.736 ], 00:16:08.736 "product_name": "Malloc disk", 00:16:08.736 "block_size": 512, 00:16:08.736 "num_blocks": 65536, 00:16:08.736 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:08.736 "assigned_rate_limits": { 00:16:08.736 "rw_ios_per_sec": 0, 00:16:08.736 "rw_mbytes_per_sec": 0, 00:16:08.736 "r_mbytes_per_sec": 0, 00:16:08.736 "w_mbytes_per_sec": 0 00:16:08.736 }, 00:16:08.736 "claimed": true, 00:16:08.736 "claim_type": "exclusive_write", 00:16:08.736 "zoned": false, 00:16:08.736 "supported_io_types": { 00:16:08.736 "read": true, 00:16:08.736 "write": true, 00:16:08.736 "unmap": true, 00:16:08.736 "flush": true, 00:16:08.736 "reset": true, 00:16:08.736 "nvme_admin": false, 00:16:08.736 "nvme_io": false, 00:16:08.736 "nvme_io_md": false, 00:16:08.736 "write_zeroes": true, 00:16:08.736 "zcopy": true, 00:16:08.736 "get_zone_info": false, 00:16:08.736 "zone_management": false, 00:16:08.736 "zone_append": false, 00:16:08.736 "compare": false, 00:16:08.736 "compare_and_write": false, 00:16:08.736 "abort": true, 00:16:08.736 "seek_hole": false, 00:16:08.736 "seek_data": false, 00:16:08.736 "copy": true, 00:16:08.736 "nvme_iov_md": false 00:16:08.736 }, 00:16:08.736 "memory_domains": [ 00:16:08.736 { 00:16:08.736 "dma_device_id": "system", 00:16:08.736 "dma_device_type": 1 00:16:08.736 }, 00:16:08.736 { 00:16:08.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.736 "dma_device_type": 2 00:16:08.736 } 
00:16:08.736 ], 00:16:08.736 "driver_specific": {} 00:16:08.736 } 00:16:08.736 ] 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.736 "name": "Existed_Raid", 00:16:08.736 "uuid": "8d338ca4-9f09-4ac4-a645-2577e6f9e697", 00:16:08.736 "strip_size_kb": 64, 00:16:08.736 "state": "online", 00:16:08.736 "raid_level": "raid5f", 00:16:08.736 "superblock": false, 00:16:08.736 "num_base_bdevs": 4, 00:16:08.736 "num_base_bdevs_discovered": 4, 00:16:08.736 "num_base_bdevs_operational": 4, 00:16:08.736 "base_bdevs_list": [ 00:16:08.736 { 00:16:08.736 "name": "NewBaseBdev", 00:16:08.736 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:08.736 "is_configured": true, 00:16:08.736 "data_offset": 0, 00:16:08.736 "data_size": 65536 00:16:08.736 }, 00:16:08.736 { 00:16:08.736 "name": "BaseBdev2", 00:16:08.736 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:08.736 "is_configured": true, 00:16:08.736 "data_offset": 0, 00:16:08.736 "data_size": 65536 00:16:08.736 }, 00:16:08.736 { 00:16:08.736 "name": "BaseBdev3", 00:16:08.736 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:08.736 "is_configured": true, 00:16:08.736 "data_offset": 0, 00:16:08.736 "data_size": 65536 00:16:08.736 }, 00:16:08.736 { 00:16:08.736 "name": "BaseBdev4", 00:16:08.736 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:08.736 "is_configured": true, 00:16:08.736 "data_offset": 0, 00:16:08.736 "data_size": 65536 00:16:08.736 } 00:16:08.736 ] 00:16:08.736 }' 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.736 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:09.304 [2024-12-14 12:42:08.749161] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.304 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:09.304 "name": "Existed_Raid", 00:16:09.304 "aliases": [ 00:16:09.304 "8d338ca4-9f09-4ac4-a645-2577e6f9e697" 00:16:09.304 ], 00:16:09.304 "product_name": "Raid Volume", 00:16:09.304 "block_size": 512, 00:16:09.304 "num_blocks": 196608, 00:16:09.304 "uuid": "8d338ca4-9f09-4ac4-a645-2577e6f9e697", 00:16:09.304 "assigned_rate_limits": { 00:16:09.304 "rw_ios_per_sec": 0, 00:16:09.304 "rw_mbytes_per_sec": 0, 00:16:09.304 "r_mbytes_per_sec": 0, 00:16:09.304 "w_mbytes_per_sec": 0 00:16:09.304 }, 00:16:09.304 "claimed": false, 00:16:09.304 "zoned": false, 00:16:09.304 "supported_io_types": { 00:16:09.304 "read": true, 00:16:09.304 "write": true, 00:16:09.304 "unmap": false, 00:16:09.304 "flush": false, 00:16:09.304 "reset": true, 00:16:09.304 "nvme_admin": false, 00:16:09.304 "nvme_io": false, 00:16:09.304 "nvme_io_md": 
false, 00:16:09.304 "write_zeroes": true, 00:16:09.304 "zcopy": false, 00:16:09.304 "get_zone_info": false, 00:16:09.304 "zone_management": false, 00:16:09.304 "zone_append": false, 00:16:09.304 "compare": false, 00:16:09.304 "compare_and_write": false, 00:16:09.304 "abort": false, 00:16:09.304 "seek_hole": false, 00:16:09.304 "seek_data": false, 00:16:09.304 "copy": false, 00:16:09.304 "nvme_iov_md": false 00:16:09.304 }, 00:16:09.304 "driver_specific": { 00:16:09.304 "raid": { 00:16:09.304 "uuid": "8d338ca4-9f09-4ac4-a645-2577e6f9e697", 00:16:09.304 "strip_size_kb": 64, 00:16:09.304 "state": "online", 00:16:09.304 "raid_level": "raid5f", 00:16:09.304 "superblock": false, 00:16:09.304 "num_base_bdevs": 4, 00:16:09.304 "num_base_bdevs_discovered": 4, 00:16:09.304 "num_base_bdevs_operational": 4, 00:16:09.304 "base_bdevs_list": [ 00:16:09.304 { 00:16:09.304 "name": "NewBaseBdev", 00:16:09.304 "uuid": "6427c0f4-7598-4df2-b776-5f04e8cc9861", 00:16:09.304 "is_configured": true, 00:16:09.304 "data_offset": 0, 00:16:09.304 "data_size": 65536 00:16:09.304 }, 00:16:09.304 { 00:16:09.304 "name": "BaseBdev2", 00:16:09.304 "uuid": "0712fe43-1f76-4feb-afb7-4a8a6cbe7fba", 00:16:09.304 "is_configured": true, 00:16:09.304 "data_offset": 0, 00:16:09.304 "data_size": 65536 00:16:09.304 }, 00:16:09.304 { 00:16:09.304 "name": "BaseBdev3", 00:16:09.304 "uuid": "b4ec6c66-0a99-4b02-8ec2-7b4d9c4b2bf4", 00:16:09.304 "is_configured": true, 00:16:09.304 "data_offset": 0, 00:16:09.304 "data_size": 65536 00:16:09.304 }, 00:16:09.304 { 00:16:09.304 "name": "BaseBdev4", 00:16:09.305 "uuid": "3d0412e0-a73c-47ef-9080-13d2a7bf38ad", 00:16:09.305 "is_configured": true, 00:16:09.305 "data_offset": 0, 00:16:09.305 "data_size": 65536 00:16:09.305 } 00:16:09.305 ] 00:16:09.305 } 00:16:09.305 } 00:16:09.305 }' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:09.305 12:42:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:09.305 BaseBdev2 00:16:09.305 BaseBdev3 00:16:09.305 BaseBdev4' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.305 12:42:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.305 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.305 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.305 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.305 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:09.305 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:09.305 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:09.305 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.305 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.564 12:42:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.564 [2024-12-14 12:42:09.068365] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:09.564 [2024-12-14 12:42:09.068396] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:09.564 [2024-12-14 12:42:09.068481] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:09.564 [2024-12-14 12:42:09.068801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:09.564 [2024-12-14 12:42:09.068820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 84505 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 84505 ']' 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 84505 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84505 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:09.564 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:09.564 killing process with pid 84505 00:16:09.565 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84505' 00:16:09.565 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 84505 00:16:09.565 [2024-12-14 12:42:09.107105] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:09.565 12:42:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 84505 00:16:09.824 [2024-12-14 12:42:09.483935] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:11.204 00:16:11.204 real 0m11.381s 00:16:11.204 user 0m18.172s 00:16:11.204 sys 0m2.037s 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.204 ************************************ 00:16:11.204 END TEST raid5f_state_function_test 00:16:11.204 ************************************ 00:16:11.204 12:42:10 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:11.204 12:42:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:11.204 12:42:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:11.204 12:42:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:11.204 ************************************ 00:16:11.204 START TEST 
raid5f_state_function_test_sb 00:16:11.204 ************************************ 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:11.204 
12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=85177 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:11.204 Process raid pid: 85177 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85177' 00:16:11.204 12:42:10 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 85177 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85177 ']' 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:11.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:11.204 12:42:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.204 [2024-12-14 12:42:10.706448] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:11.204 [2024-12-14 12:42:10.706583] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:11.204 [2024-12-14 12:42:10.878182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.465 [2024-12-14 12:42:10.986757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.465 [2024-12-14 12:42:11.182787] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:11.465 [2024-12-14 12:42:11.182829] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.057 [2024-12-14 12:42:11.530363] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.057 [2024-12-14 12:42:11.530418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.057 [2024-12-14 12:42:11.530428] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.057 [2024-12-14 12:42:11.530438] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.057 [2024-12-14 12:42:11.530444] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:12.057 [2024-12-14 12:42:11.530453] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.057 [2024-12-14 12:42:11.530459] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:12.057 [2024-12-14 12:42:11.530467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.057 "name": "Existed_Raid", 00:16:12.057 "uuid": "3d533770-47b0-45f0-add9-0bf99a18e47f", 00:16:12.057 "strip_size_kb": 64, 00:16:12.057 "state": "configuring", 00:16:12.057 "raid_level": "raid5f", 00:16:12.057 "superblock": true, 00:16:12.057 "num_base_bdevs": 4, 00:16:12.057 "num_base_bdevs_discovered": 0, 00:16:12.057 "num_base_bdevs_operational": 4, 00:16:12.057 "base_bdevs_list": [ 00:16:12.057 { 00:16:12.057 "name": "BaseBdev1", 00:16:12.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.057 "is_configured": false, 00:16:12.057 "data_offset": 0, 00:16:12.057 "data_size": 0 00:16:12.057 }, 00:16:12.057 { 00:16:12.057 "name": "BaseBdev2", 00:16:12.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.057 "is_configured": false, 00:16:12.057 "data_offset": 0, 00:16:12.057 "data_size": 0 00:16:12.057 }, 00:16:12.057 { 00:16:12.057 "name": "BaseBdev3", 00:16:12.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.057 "is_configured": false, 00:16:12.057 "data_offset": 0, 00:16:12.057 "data_size": 0 00:16:12.057 }, 00:16:12.057 { 00:16:12.057 "name": "BaseBdev4", 00:16:12.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.057 "is_configured": false, 00:16:12.057 "data_offset": 0, 00:16:12.057 "data_size": 0 00:16:12.057 } 00:16:12.057 ] 00:16:12.057 }' 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.057 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:12.332 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.332 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.332 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.332 [2024-12-14 12:42:11.993506] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.332 [2024-12-14 12:42:11.993548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:12.332 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.332 12:42:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:12.332 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.332 12:42:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.332 [2024-12-14 12:42:12.005491] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:12.332 [2024-12-14 12:42:12.005534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:12.332 [2024-12-14 12:42:12.005543] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.332 [2024-12-14 12:42:12.005552] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.332 [2024-12-14 12:42:12.005557] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:12.332 [2024-12-14 12:42:12.005566] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.332 [2024-12-14 12:42:12.005571] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:12.332 [2024-12-14 12:42:12.005579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:12.332 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.332 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.333 [2024-12-14 12:42:12.052877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.333 BaseBdev1 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.333 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.593 [ 00:16:12.593 { 00:16:12.593 "name": "BaseBdev1", 00:16:12.593 "aliases": [ 00:16:12.593 "4a83dece-309b-4224-8d55-0c1ad14da8d6" 00:16:12.593 ], 00:16:12.593 "product_name": "Malloc disk", 00:16:12.593 "block_size": 512, 00:16:12.593 "num_blocks": 65536, 00:16:12.593 "uuid": "4a83dece-309b-4224-8d55-0c1ad14da8d6", 00:16:12.593 "assigned_rate_limits": { 00:16:12.593 "rw_ios_per_sec": 0, 00:16:12.593 "rw_mbytes_per_sec": 0, 00:16:12.593 "r_mbytes_per_sec": 0, 00:16:12.593 "w_mbytes_per_sec": 0 00:16:12.593 }, 00:16:12.593 "claimed": true, 00:16:12.593 "claim_type": "exclusive_write", 00:16:12.593 "zoned": false, 00:16:12.593 "supported_io_types": { 00:16:12.593 "read": true, 00:16:12.593 "write": true, 00:16:12.593 "unmap": true, 00:16:12.593 "flush": true, 00:16:12.593 "reset": true, 00:16:12.593 "nvme_admin": false, 00:16:12.593 "nvme_io": false, 00:16:12.593 "nvme_io_md": false, 00:16:12.593 "write_zeroes": true, 00:16:12.593 "zcopy": true, 00:16:12.593 "get_zone_info": false, 00:16:12.593 "zone_management": false, 00:16:12.593 "zone_append": false, 00:16:12.593 "compare": false, 00:16:12.593 "compare_and_write": false, 00:16:12.593 "abort": true, 00:16:12.593 "seek_hole": false, 00:16:12.593 "seek_data": false, 00:16:12.593 "copy": true, 00:16:12.593 "nvme_iov_md": false 00:16:12.593 }, 00:16:12.593 "memory_domains": [ 00:16:12.593 { 00:16:12.593 "dma_device_id": "system", 00:16:12.593 "dma_device_type": 1 00:16:12.593 }, 00:16:12.593 { 00:16:12.593 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:12.593 "dma_device_type": 2 00:16:12.593 } 00:16:12.593 ], 00:16:12.593 "driver_specific": {} 00:16:12.593 } 00:16:12.593 ] 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.593 12:42:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.593 "name": "Existed_Raid", 00:16:12.593 "uuid": "8883fe2f-0f39-4081-a495-5f59db475e84", 00:16:12.593 "strip_size_kb": 64, 00:16:12.593 "state": "configuring", 00:16:12.593 "raid_level": "raid5f", 00:16:12.593 "superblock": true, 00:16:12.593 "num_base_bdevs": 4, 00:16:12.593 "num_base_bdevs_discovered": 1, 00:16:12.593 "num_base_bdevs_operational": 4, 00:16:12.593 "base_bdevs_list": [ 00:16:12.593 { 00:16:12.593 "name": "BaseBdev1", 00:16:12.593 "uuid": "4a83dece-309b-4224-8d55-0c1ad14da8d6", 00:16:12.593 "is_configured": true, 00:16:12.593 "data_offset": 2048, 00:16:12.593 "data_size": 63488 00:16:12.593 }, 00:16:12.593 { 00:16:12.593 "name": "BaseBdev2", 00:16:12.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.593 "is_configured": false, 00:16:12.593 "data_offset": 0, 00:16:12.593 "data_size": 0 00:16:12.593 }, 00:16:12.593 { 00:16:12.593 "name": "BaseBdev3", 00:16:12.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.593 "is_configured": false, 00:16:12.593 "data_offset": 0, 00:16:12.593 "data_size": 0 00:16:12.593 }, 00:16:12.593 { 00:16:12.593 "name": "BaseBdev4", 00:16:12.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.593 "is_configured": false, 00:16:12.593 "data_offset": 0, 00:16:12.593 "data_size": 0 00:16:12.593 } 00:16:12.593 ] 00:16:12.593 }' 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.593 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:12.853 12:42:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.853 [2024-12-14 12:42:12.504139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:12.853 [2024-12-14 12:42:12.504196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.853 [2024-12-14 12:42:12.516212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:12.853 [2024-12-14 12:42:12.517985] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:12.853 [2024-12-14 12:42:12.518025] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:12.853 [2024-12-14 12:42:12.518035] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:12.853 [2024-12-14 12:42:12.518071] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:12.853 [2024-12-14 12:42:12.518078] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:12.853 [2024-12-14 12:42:12.518086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.853 12:42:12 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.853 "name": "Existed_Raid", 00:16:12.853 "uuid": "f20bf748-1787-4379-b6c7-88fff2199edf", 00:16:12.853 "strip_size_kb": 64, 00:16:12.853 "state": "configuring", 00:16:12.853 "raid_level": "raid5f", 00:16:12.853 "superblock": true, 00:16:12.853 "num_base_bdevs": 4, 00:16:12.853 "num_base_bdevs_discovered": 1, 00:16:12.853 "num_base_bdevs_operational": 4, 00:16:12.853 "base_bdevs_list": [ 00:16:12.853 { 00:16:12.853 "name": "BaseBdev1", 00:16:12.853 "uuid": "4a83dece-309b-4224-8d55-0c1ad14da8d6", 00:16:12.853 "is_configured": true, 00:16:12.853 "data_offset": 2048, 00:16:12.853 "data_size": 63488 00:16:12.853 }, 00:16:12.853 { 00:16:12.853 "name": "BaseBdev2", 00:16:12.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.853 "is_configured": false, 00:16:12.853 "data_offset": 0, 00:16:12.853 "data_size": 0 00:16:12.853 }, 00:16:12.853 { 00:16:12.853 "name": "BaseBdev3", 00:16:12.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.853 "is_configured": false, 00:16:12.853 "data_offset": 0, 00:16:12.853 "data_size": 0 00:16:12.853 }, 00:16:12.853 { 00:16:12.853 "name": "BaseBdev4", 00:16:12.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.853 "is_configured": false, 00:16:12.853 "data_offset": 0, 00:16:12.853 "data_size": 0 00:16:12.853 } 00:16:12.853 ] 00:16:12.853 }' 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.853 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
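The records above repeatedly run `rpc_cmd bdev_raid_get_bdevs all` piped through `jq -r '.[] | select(.name == "Existed_Raid")'` to pull one raid bdev's record out of the array. As a side note, that selection step can be sketched in Python; the sample record below is abridged from the `raid_bdev_info` dump captured above, and `select_raid_bdev` is a hypothetical helper name, not part of SPDK:

```python
import json

# Abridged from the raid_bdev_info dump captured in the log above.
raid_bdevs_json = """
[
  {
    "name": "Existed_Raid",
    "state": "configuring",
    "raid_level": "raid5f",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 1
  }
]
"""

def select_raid_bdev(raw: str, name: str):
    """Rough Python equivalent of: jq -r '.[] | select(.name == NAME)'."""
    return next((b for b in json.loads(raw) if b["name"] == name), None)

info = select_raid_bdev(raid_bdevs_json, "Existed_Raid")
print(info["state"])       # configuring
print(info["raid_level"])  # raid5f
```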
00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.422 [2024-12-14 12:42:12.983017] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.422 BaseBdev2 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.422 12:42:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.422 [ 00:16:13.422 { 00:16:13.422 "name": "BaseBdev2", 00:16:13.422 "aliases": [ 00:16:13.422 
"51cc19d1-3256-4337-9d7f-2cd318467dd4" 00:16:13.422 ], 00:16:13.422 "product_name": "Malloc disk", 00:16:13.422 "block_size": 512, 00:16:13.422 "num_blocks": 65536, 00:16:13.422 "uuid": "51cc19d1-3256-4337-9d7f-2cd318467dd4", 00:16:13.422 "assigned_rate_limits": { 00:16:13.422 "rw_ios_per_sec": 0, 00:16:13.422 "rw_mbytes_per_sec": 0, 00:16:13.422 "r_mbytes_per_sec": 0, 00:16:13.422 "w_mbytes_per_sec": 0 00:16:13.422 }, 00:16:13.422 "claimed": true, 00:16:13.422 "claim_type": "exclusive_write", 00:16:13.422 "zoned": false, 00:16:13.422 "supported_io_types": { 00:16:13.422 "read": true, 00:16:13.423 "write": true, 00:16:13.423 "unmap": true, 00:16:13.423 "flush": true, 00:16:13.423 "reset": true, 00:16:13.423 "nvme_admin": false, 00:16:13.423 "nvme_io": false, 00:16:13.423 "nvme_io_md": false, 00:16:13.423 "write_zeroes": true, 00:16:13.423 "zcopy": true, 00:16:13.423 "get_zone_info": false, 00:16:13.423 "zone_management": false, 00:16:13.423 "zone_append": false, 00:16:13.423 "compare": false, 00:16:13.423 "compare_and_write": false, 00:16:13.423 "abort": true, 00:16:13.423 "seek_hole": false, 00:16:13.423 "seek_data": false, 00:16:13.423 "copy": true, 00:16:13.423 "nvme_iov_md": false 00:16:13.423 }, 00:16:13.423 "memory_domains": [ 00:16:13.423 { 00:16:13.423 "dma_device_id": "system", 00:16:13.423 "dma_device_type": 1 00:16:13.423 }, 00:16:13.423 { 00:16:13.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.423 "dma_device_type": 2 00:16:13.423 } 00:16:13.423 ], 00:16:13.423 "driver_specific": {} 00:16:13.423 } 00:16:13.423 ] 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.423 "name": "Existed_Raid", 00:16:13.423 "uuid": 
"f20bf748-1787-4379-b6c7-88fff2199edf", 00:16:13.423 "strip_size_kb": 64, 00:16:13.423 "state": "configuring", 00:16:13.423 "raid_level": "raid5f", 00:16:13.423 "superblock": true, 00:16:13.423 "num_base_bdevs": 4, 00:16:13.423 "num_base_bdevs_discovered": 2, 00:16:13.423 "num_base_bdevs_operational": 4, 00:16:13.423 "base_bdevs_list": [ 00:16:13.423 { 00:16:13.423 "name": "BaseBdev1", 00:16:13.423 "uuid": "4a83dece-309b-4224-8d55-0c1ad14da8d6", 00:16:13.423 "is_configured": true, 00:16:13.423 "data_offset": 2048, 00:16:13.423 "data_size": 63488 00:16:13.423 }, 00:16:13.423 { 00:16:13.423 "name": "BaseBdev2", 00:16:13.423 "uuid": "51cc19d1-3256-4337-9d7f-2cd318467dd4", 00:16:13.423 "is_configured": true, 00:16:13.423 "data_offset": 2048, 00:16:13.423 "data_size": 63488 00:16:13.423 }, 00:16:13.423 { 00:16:13.423 "name": "BaseBdev3", 00:16:13.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.423 "is_configured": false, 00:16:13.423 "data_offset": 0, 00:16:13.423 "data_size": 0 00:16:13.423 }, 00:16:13.423 { 00:16:13.423 "name": "BaseBdev4", 00:16:13.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.423 "is_configured": false, 00:16:13.423 "data_offset": 0, 00:16:13.423 "data_size": 0 00:16:13.423 } 00:16:13.423 ] 00:16:13.423 }' 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.423 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.993 [2024-12-14 12:42:13.496651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.993 BaseBdev3 
00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.993 [ 00:16:13.993 { 00:16:13.993 "name": "BaseBdev3", 00:16:13.993 "aliases": [ 00:16:13.993 "07ebc2f5-018e-43cd-8fcd-2ae731b84a09" 00:16:13.993 ], 00:16:13.993 "product_name": "Malloc disk", 00:16:13.993 "block_size": 512, 00:16:13.993 "num_blocks": 65536, 00:16:13.993 "uuid": "07ebc2f5-018e-43cd-8fcd-2ae731b84a09", 00:16:13.993 
"assigned_rate_limits": { 00:16:13.993 "rw_ios_per_sec": 0, 00:16:13.993 "rw_mbytes_per_sec": 0, 00:16:13.993 "r_mbytes_per_sec": 0, 00:16:13.993 "w_mbytes_per_sec": 0 00:16:13.993 }, 00:16:13.993 "claimed": true, 00:16:13.993 "claim_type": "exclusive_write", 00:16:13.993 "zoned": false, 00:16:13.993 "supported_io_types": { 00:16:13.993 "read": true, 00:16:13.993 "write": true, 00:16:13.993 "unmap": true, 00:16:13.993 "flush": true, 00:16:13.993 "reset": true, 00:16:13.993 "nvme_admin": false, 00:16:13.993 "nvme_io": false, 00:16:13.993 "nvme_io_md": false, 00:16:13.993 "write_zeroes": true, 00:16:13.993 "zcopy": true, 00:16:13.993 "get_zone_info": false, 00:16:13.993 "zone_management": false, 00:16:13.993 "zone_append": false, 00:16:13.993 "compare": false, 00:16:13.993 "compare_and_write": false, 00:16:13.993 "abort": true, 00:16:13.993 "seek_hole": false, 00:16:13.993 "seek_data": false, 00:16:13.993 "copy": true, 00:16:13.993 "nvme_iov_md": false 00:16:13.993 }, 00:16:13.993 "memory_domains": [ 00:16:13.993 { 00:16:13.993 "dma_device_id": "system", 00:16:13.993 "dma_device_type": 1 00:16:13.993 }, 00:16:13.993 { 00:16:13.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:13.993 "dma_device_type": 2 00:16:13.993 } 00:16:13.993 ], 00:16:13.993 "driver_specific": {} 00:16:13.993 } 00:16:13.993 ] 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.993 "name": "Existed_Raid", 00:16:13.993 "uuid": "f20bf748-1787-4379-b6c7-88fff2199edf", 00:16:13.993 "strip_size_kb": 64, 00:16:13.993 "state": "configuring", 00:16:13.993 "raid_level": "raid5f", 00:16:13.993 "superblock": true, 00:16:13.993 "num_base_bdevs": 4, 00:16:13.993 "num_base_bdevs_discovered": 3, 
00:16:13.993 "num_base_bdevs_operational": 4, 00:16:13.993 "base_bdevs_list": [ 00:16:13.993 { 00:16:13.993 "name": "BaseBdev1", 00:16:13.993 "uuid": "4a83dece-309b-4224-8d55-0c1ad14da8d6", 00:16:13.993 "is_configured": true, 00:16:13.993 "data_offset": 2048, 00:16:13.993 "data_size": 63488 00:16:13.993 }, 00:16:13.993 { 00:16:13.993 "name": "BaseBdev2", 00:16:13.993 "uuid": "51cc19d1-3256-4337-9d7f-2cd318467dd4", 00:16:13.993 "is_configured": true, 00:16:13.993 "data_offset": 2048, 00:16:13.993 "data_size": 63488 00:16:13.993 }, 00:16:13.993 { 00:16:13.993 "name": "BaseBdev3", 00:16:13.993 "uuid": "07ebc2f5-018e-43cd-8fcd-2ae731b84a09", 00:16:13.993 "is_configured": true, 00:16:13.993 "data_offset": 2048, 00:16:13.993 "data_size": 63488 00:16:13.993 }, 00:16:13.993 { 00:16:13.993 "name": "BaseBdev4", 00:16:13.993 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.993 "is_configured": false, 00:16:13.993 "data_offset": 0, 00:16:13.993 "data_size": 0 00:16:13.993 } 00:16:13.993 ] 00:16:13.993 }' 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.993 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.563 12:42:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:14.563 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.564 12:42:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.564 [2024-12-14 12:42:14.026901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:14.564 [2024-12-14 12:42:14.027213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:14.564 [2024-12-14 12:42:14.027230] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:14.564 [2024-12-14 
12:42:14.027490] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:14.564 BaseBdev4 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.564 [2024-12-14 12:42:14.034821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:14.564 [2024-12-14 12:42:14.034848] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:14.564 [2024-12-14 12:42:14.035116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:14.564 12:42:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.564 [ 00:16:14.564 { 00:16:14.564 "name": "BaseBdev4", 00:16:14.564 "aliases": [ 00:16:14.564 "1e00e4a0-87a0-4c6b-9447-ac69527d4785" 00:16:14.564 ], 00:16:14.564 "product_name": "Malloc disk", 00:16:14.564 "block_size": 512, 00:16:14.564 "num_blocks": 65536, 00:16:14.564 "uuid": "1e00e4a0-87a0-4c6b-9447-ac69527d4785", 00:16:14.564 "assigned_rate_limits": { 00:16:14.564 "rw_ios_per_sec": 0, 00:16:14.564 "rw_mbytes_per_sec": 0, 00:16:14.564 "r_mbytes_per_sec": 0, 00:16:14.564 "w_mbytes_per_sec": 0 00:16:14.564 }, 00:16:14.564 "claimed": true, 00:16:14.564 "claim_type": "exclusive_write", 00:16:14.564 "zoned": false, 00:16:14.564 "supported_io_types": { 00:16:14.564 "read": true, 00:16:14.564 "write": true, 00:16:14.564 "unmap": true, 00:16:14.564 "flush": true, 00:16:14.564 "reset": true, 00:16:14.564 "nvme_admin": false, 00:16:14.564 "nvme_io": false, 00:16:14.564 "nvme_io_md": false, 00:16:14.564 "write_zeroes": true, 00:16:14.564 "zcopy": true, 00:16:14.564 "get_zone_info": false, 00:16:14.564 "zone_management": false, 00:16:14.564 "zone_append": false, 00:16:14.564 "compare": false, 00:16:14.564 "compare_and_write": false, 00:16:14.564 "abort": true, 00:16:14.564 "seek_hole": false, 00:16:14.564 "seek_data": false, 00:16:14.564 "copy": true, 00:16:14.564 "nvme_iov_md": false 00:16:14.564 }, 00:16:14.564 "memory_domains": [ 00:16:14.564 { 00:16:14.564 "dma_device_id": "system", 00:16:14.564 "dma_device_type": 1 00:16:14.564 }, 00:16:14.564 { 00:16:14.564 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.564 "dma_device_type": 2 00:16:14.564 } 00:16:14.564 ], 00:16:14.564 "driver_specific": {} 00:16:14.564 } 00:16:14.564 ] 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.564 12:42:14 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
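The configure record above logs `blockcnt 190464, blocklen 512`, and the same `num_blocks` shows up in the Raid Volume dump below. That figure follows from the geometry visible in the dumps: four 65536-block malloc base bdevs, a superblock `data_offset` of 2048 blocks, and raid5f giving up one base bdev's worth of capacity to parity. A quick check of that arithmetic (assuming this is indeed how the block count is derived; the variable names are illustrative):

```python
# Values taken from the log dumps above.
num_base_bdevs = 4
base_num_blocks = 65536   # each malloc bdev: 65536 blocks of 512 bytes
data_offset = 2048        # blocks reserved for the superblock (-s flag)

data_size = base_num_blocks - data_offset          # 63488, as dumped
# raid5f keeps one base bdev's worth of capacity for parity.
raid_num_blocks = (num_base_bdevs - 1) * data_size

print(data_size)        # 63488
print(raid_num_blocks)  # 190464, matching "blockcnt 190464" in the log
```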
00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.564 "name": "Existed_Raid", 00:16:14.564 "uuid": "f20bf748-1787-4379-b6c7-88fff2199edf", 00:16:14.564 "strip_size_kb": 64, 00:16:14.564 "state": "online", 00:16:14.564 "raid_level": "raid5f", 00:16:14.564 "superblock": true, 00:16:14.564 "num_base_bdevs": 4, 00:16:14.564 "num_base_bdevs_discovered": 4, 00:16:14.564 "num_base_bdevs_operational": 4, 00:16:14.564 "base_bdevs_list": [ 00:16:14.564 { 00:16:14.564 "name": "BaseBdev1", 00:16:14.564 "uuid": "4a83dece-309b-4224-8d55-0c1ad14da8d6", 00:16:14.564 "is_configured": true, 00:16:14.564 "data_offset": 2048, 00:16:14.564 "data_size": 63488 00:16:14.564 }, 00:16:14.564 { 00:16:14.564 "name": "BaseBdev2", 00:16:14.564 "uuid": "51cc19d1-3256-4337-9d7f-2cd318467dd4", 00:16:14.564 "is_configured": true, 00:16:14.564 "data_offset": 2048, 00:16:14.564 "data_size": 63488 00:16:14.564 }, 00:16:14.564 { 00:16:14.564 "name": "BaseBdev3", 00:16:14.564 "uuid": "07ebc2f5-018e-43cd-8fcd-2ae731b84a09", 00:16:14.564 "is_configured": true, 00:16:14.564 "data_offset": 2048, 00:16:14.564 "data_size": 63488 00:16:14.564 }, 00:16:14.564 { 00:16:14.564 "name": "BaseBdev4", 00:16:14.564 "uuid": "1e00e4a0-87a0-4c6b-9447-ac69527d4785", 00:16:14.564 "is_configured": true, 00:16:14.564 "data_offset": 2048, 00:16:14.564 "data_size": 63488 00:16:14.564 } 00:16:14.564 ] 00:16:14.564 }' 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.564 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.824 [2024-12-14 12:42:14.522832] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:14.824 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:15.084 "name": "Existed_Raid", 00:16:15.084 "aliases": [ 00:16:15.084 "f20bf748-1787-4379-b6c7-88fff2199edf" 00:16:15.084 ], 00:16:15.084 "product_name": "Raid Volume", 00:16:15.084 "block_size": 512, 00:16:15.084 "num_blocks": 190464, 00:16:15.084 "uuid": "f20bf748-1787-4379-b6c7-88fff2199edf", 00:16:15.084 "assigned_rate_limits": { 00:16:15.084 "rw_ios_per_sec": 0, 00:16:15.084 "rw_mbytes_per_sec": 0, 00:16:15.084 "r_mbytes_per_sec": 0, 00:16:15.084 "w_mbytes_per_sec": 0 00:16:15.084 }, 00:16:15.084 "claimed": false, 00:16:15.084 "zoned": false, 00:16:15.084 "supported_io_types": { 00:16:15.084 "read": true, 00:16:15.084 "write": true, 00:16:15.084 "unmap": false, 00:16:15.084 "flush": false, 
00:16:15.084 "reset": true, 00:16:15.084 "nvme_admin": false, 00:16:15.084 "nvme_io": false, 00:16:15.084 "nvme_io_md": false, 00:16:15.084 "write_zeroes": true, 00:16:15.084 "zcopy": false, 00:16:15.084 "get_zone_info": false, 00:16:15.084 "zone_management": false, 00:16:15.084 "zone_append": false, 00:16:15.084 "compare": false, 00:16:15.084 "compare_and_write": false, 00:16:15.084 "abort": false, 00:16:15.084 "seek_hole": false, 00:16:15.084 "seek_data": false, 00:16:15.084 "copy": false, 00:16:15.084 "nvme_iov_md": false 00:16:15.084 }, 00:16:15.084 "driver_specific": { 00:16:15.084 "raid": { 00:16:15.084 "uuid": "f20bf748-1787-4379-b6c7-88fff2199edf", 00:16:15.084 "strip_size_kb": 64, 00:16:15.084 "state": "online", 00:16:15.084 "raid_level": "raid5f", 00:16:15.084 "superblock": true, 00:16:15.084 "num_base_bdevs": 4, 00:16:15.084 "num_base_bdevs_discovered": 4, 00:16:15.084 "num_base_bdevs_operational": 4, 00:16:15.084 "base_bdevs_list": [ 00:16:15.084 { 00:16:15.084 "name": "BaseBdev1", 00:16:15.084 "uuid": "4a83dece-309b-4224-8d55-0c1ad14da8d6", 00:16:15.084 "is_configured": true, 00:16:15.084 "data_offset": 2048, 00:16:15.084 "data_size": 63488 00:16:15.084 }, 00:16:15.084 { 00:16:15.084 "name": "BaseBdev2", 00:16:15.084 "uuid": "51cc19d1-3256-4337-9d7f-2cd318467dd4", 00:16:15.084 "is_configured": true, 00:16:15.084 "data_offset": 2048, 00:16:15.084 "data_size": 63488 00:16:15.084 }, 00:16:15.084 { 00:16:15.084 "name": "BaseBdev3", 00:16:15.084 "uuid": "07ebc2f5-018e-43cd-8fcd-2ae731b84a09", 00:16:15.084 "is_configured": true, 00:16:15.084 "data_offset": 2048, 00:16:15.084 "data_size": 63488 00:16:15.084 }, 00:16:15.084 { 00:16:15.084 "name": "BaseBdev4", 00:16:15.084 "uuid": "1e00e4a0-87a0-4c6b-9447-ac69527d4785", 00:16:15.084 "is_configured": true, 00:16:15.084 "data_offset": 2048, 00:16:15.084 "data_size": 63488 00:16:15.084 } 00:16:15.084 ] 00:16:15.084 } 00:16:15.084 } 00:16:15.084 }' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:15.084 BaseBdev2 00:16:15.084 BaseBdev3 00:16:15.084 BaseBdev4' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:15.084 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.345 [2024-12-14 12:42:14.842130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.345 "name": "Existed_Raid", 00:16:15.345 "uuid": "f20bf748-1787-4379-b6c7-88fff2199edf", 00:16:15.345 "strip_size_kb": 64, 00:16:15.345 "state": "online", 00:16:15.345 "raid_level": "raid5f", 00:16:15.345 "superblock": true, 00:16:15.345 "num_base_bdevs": 4, 00:16:15.345 "num_base_bdevs_discovered": 3, 00:16:15.345 "num_base_bdevs_operational": 3, 00:16:15.345 "base_bdevs_list": [ 00:16:15.345 { 00:16:15.345 "name": null, 00:16:15.345 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:15.345 "is_configured": false, 00:16:15.345 "data_offset": 0, 00:16:15.345 "data_size": 63488 00:16:15.345 }, 00:16:15.345 { 00:16:15.345 "name": "BaseBdev2", 00:16:15.345 "uuid": "51cc19d1-3256-4337-9d7f-2cd318467dd4", 00:16:15.345 "is_configured": true, 00:16:15.345 "data_offset": 2048, 00:16:15.345 "data_size": 63488 00:16:15.345 }, 00:16:15.345 { 00:16:15.345 "name": "BaseBdev3", 00:16:15.345 "uuid": "07ebc2f5-018e-43cd-8fcd-2ae731b84a09", 00:16:15.345 "is_configured": true, 00:16:15.345 "data_offset": 2048, 00:16:15.345 "data_size": 63488 00:16:15.345 }, 00:16:15.345 { 00:16:15.345 "name": "BaseBdev4", 00:16:15.345 "uuid": "1e00e4a0-87a0-4c6b-9447-ac69527d4785", 00:16:15.345 "is_configured": true, 00:16:15.345 "data_offset": 2048, 00:16:15.345 "data_size": 63488 00:16:15.345 } 00:16:15.345 ] 00:16:15.345 }' 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.345 12:42:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.915 [2024-12-14 12:42:15.426348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:15.915 [2024-12-14 12:42:15.426518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:15.915 [2024-12-14 12:42:15.515625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:15.915 
12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.915 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.915 [2024-12-14 12:42:15.571522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.175 [2024-12-14 12:42:15.722795] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:16.175 [2024-12-14 12:42:15.722866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.175 12:42:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:16.436 BaseBdev2 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.436 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.436 [ 00:16:16.436 { 00:16:16.436 "name": "BaseBdev2", 00:16:16.436 "aliases": [ 00:16:16.436 "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5" 00:16:16.436 ], 00:16:16.436 "product_name": "Malloc disk", 00:16:16.436 "block_size": 512, 00:16:16.436 "num_blocks": 65536, 00:16:16.436 "uuid": 
"51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:16.436 "assigned_rate_limits": { 00:16:16.436 "rw_ios_per_sec": 0, 00:16:16.436 "rw_mbytes_per_sec": 0, 00:16:16.436 "r_mbytes_per_sec": 0, 00:16:16.436 "w_mbytes_per_sec": 0 00:16:16.436 }, 00:16:16.436 "claimed": false, 00:16:16.436 "zoned": false, 00:16:16.436 "supported_io_types": { 00:16:16.436 "read": true, 00:16:16.436 "write": true, 00:16:16.436 "unmap": true, 00:16:16.436 "flush": true, 00:16:16.436 "reset": true, 00:16:16.436 "nvme_admin": false, 00:16:16.436 "nvme_io": false, 00:16:16.436 "nvme_io_md": false, 00:16:16.436 "write_zeroes": true, 00:16:16.436 "zcopy": true, 00:16:16.436 "get_zone_info": false, 00:16:16.436 "zone_management": false, 00:16:16.436 "zone_append": false, 00:16:16.436 "compare": false, 00:16:16.436 "compare_and_write": false, 00:16:16.436 "abort": true, 00:16:16.436 "seek_hole": false, 00:16:16.436 "seek_data": false, 00:16:16.436 "copy": true, 00:16:16.436 "nvme_iov_md": false 00:16:16.436 }, 00:16:16.436 "memory_domains": [ 00:16:16.436 { 00:16:16.436 "dma_device_id": "system", 00:16:16.436 "dma_device_type": 1 00:16:16.436 }, 00:16:16.436 { 00:16:16.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.436 "dma_device_type": 2 00:16:16.436 } 00:16:16.436 ], 00:16:16.436 "driver_specific": {} 00:16:16.436 } 00:16:16.436 ] 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 BaseBdev3 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 12:42:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 [ 00:16:16.437 { 00:16:16.437 "name": "BaseBdev3", 00:16:16.437 "aliases": [ 00:16:16.437 "f510faa1-31f8-4ae4-855c-e71454449349" 00:16:16.437 ], 00:16:16.437 
"product_name": "Malloc disk", 00:16:16.437 "block_size": 512, 00:16:16.437 "num_blocks": 65536, 00:16:16.437 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:16.437 "assigned_rate_limits": { 00:16:16.437 "rw_ios_per_sec": 0, 00:16:16.437 "rw_mbytes_per_sec": 0, 00:16:16.437 "r_mbytes_per_sec": 0, 00:16:16.437 "w_mbytes_per_sec": 0 00:16:16.437 }, 00:16:16.437 "claimed": false, 00:16:16.437 "zoned": false, 00:16:16.437 "supported_io_types": { 00:16:16.437 "read": true, 00:16:16.437 "write": true, 00:16:16.437 "unmap": true, 00:16:16.437 "flush": true, 00:16:16.437 "reset": true, 00:16:16.437 "nvme_admin": false, 00:16:16.437 "nvme_io": false, 00:16:16.437 "nvme_io_md": false, 00:16:16.437 "write_zeroes": true, 00:16:16.437 "zcopy": true, 00:16:16.437 "get_zone_info": false, 00:16:16.437 "zone_management": false, 00:16:16.437 "zone_append": false, 00:16:16.437 "compare": false, 00:16:16.437 "compare_and_write": false, 00:16:16.437 "abort": true, 00:16:16.437 "seek_hole": false, 00:16:16.437 "seek_data": false, 00:16:16.437 "copy": true, 00:16:16.437 "nvme_iov_md": false 00:16:16.437 }, 00:16:16.437 "memory_domains": [ 00:16:16.437 { 00:16:16.437 "dma_device_id": "system", 00:16:16.437 "dma_device_type": 1 00:16:16.437 }, 00:16:16.437 { 00:16:16.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.437 "dma_device_type": 2 00:16:16.437 } 00:16:16.437 ], 00:16:16.437 "driver_specific": {} 00:16:16.437 } 00:16:16.437 ] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 BaseBdev4 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 [ 00:16:16.437 { 00:16:16.437 "name": "BaseBdev4", 00:16:16.437 
"aliases": [ 00:16:16.437 "5282e67b-fb03-406f-a226-809d95f18e07" 00:16:16.437 ], 00:16:16.437 "product_name": "Malloc disk", 00:16:16.437 "block_size": 512, 00:16:16.437 "num_blocks": 65536, 00:16:16.437 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:16.437 "assigned_rate_limits": { 00:16:16.437 "rw_ios_per_sec": 0, 00:16:16.437 "rw_mbytes_per_sec": 0, 00:16:16.437 "r_mbytes_per_sec": 0, 00:16:16.437 "w_mbytes_per_sec": 0 00:16:16.437 }, 00:16:16.437 "claimed": false, 00:16:16.437 "zoned": false, 00:16:16.437 "supported_io_types": { 00:16:16.437 "read": true, 00:16:16.437 "write": true, 00:16:16.437 "unmap": true, 00:16:16.437 "flush": true, 00:16:16.437 "reset": true, 00:16:16.437 "nvme_admin": false, 00:16:16.437 "nvme_io": false, 00:16:16.437 "nvme_io_md": false, 00:16:16.437 "write_zeroes": true, 00:16:16.437 "zcopy": true, 00:16:16.437 "get_zone_info": false, 00:16:16.437 "zone_management": false, 00:16:16.437 "zone_append": false, 00:16:16.437 "compare": false, 00:16:16.437 "compare_and_write": false, 00:16:16.437 "abort": true, 00:16:16.437 "seek_hole": false, 00:16:16.437 "seek_data": false, 00:16:16.437 "copy": true, 00:16:16.437 "nvme_iov_md": false 00:16:16.437 }, 00:16:16.437 "memory_domains": [ 00:16:16.437 { 00:16:16.437 "dma_device_id": "system", 00:16:16.437 "dma_device_type": 1 00:16:16.437 }, 00:16:16.437 { 00:16:16.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:16.437 "dma_device_type": 2 00:16:16.437 } 00:16:16.437 ], 00:16:16.437 "driver_specific": {} 00:16:16.437 } 00:16:16.437 ] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:16.437 
12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 [2024-12-14 12:42:16.104027] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:16.437 [2024-12-14 12:42:16.104096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:16.437 [2024-12-14 12:42:16.104118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:16.437 [2024-12-14 12:42:16.105883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:16.437 [2024-12-14 12:42:16.105954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.437 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.438 "name": "Existed_Raid", 00:16:16.438 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:16.438 "strip_size_kb": 64, 00:16:16.438 "state": "configuring", 00:16:16.438 "raid_level": "raid5f", 00:16:16.438 "superblock": true, 00:16:16.438 "num_base_bdevs": 4, 00:16:16.438 "num_base_bdevs_discovered": 3, 00:16:16.438 "num_base_bdevs_operational": 4, 00:16:16.438 "base_bdevs_list": [ 00:16:16.438 { 00:16:16.438 "name": "BaseBdev1", 00:16:16.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.438 "is_configured": false, 00:16:16.438 "data_offset": 0, 00:16:16.438 "data_size": 0 00:16:16.438 }, 00:16:16.438 { 00:16:16.438 "name": "BaseBdev2", 00:16:16.438 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:16.438 "is_configured": true, 00:16:16.438 "data_offset": 2048, 00:16:16.438 "data_size": 63488 00:16:16.438 }, 00:16:16.438 { 00:16:16.438 "name": "BaseBdev3", 
00:16:16.438 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:16.438 "is_configured": true, 00:16:16.438 "data_offset": 2048, 00:16:16.438 "data_size": 63488 00:16:16.438 }, 00:16:16.438 { 00:16:16.438 "name": "BaseBdev4", 00:16:16.438 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:16.438 "is_configured": true, 00:16:16.438 "data_offset": 2048, 00:16:16.438 "data_size": 63488 00:16:16.438 } 00:16:16.438 ] 00:16:16.438 }' 00:16:16.438 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.438 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.007 [2024-12-14 12:42:16.559268] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.007 
12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.007 "name": "Existed_Raid", 00:16:17.007 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:17.007 "strip_size_kb": 64, 00:16:17.007 "state": "configuring", 00:16:17.007 "raid_level": "raid5f", 00:16:17.007 "superblock": true, 00:16:17.007 "num_base_bdevs": 4, 00:16:17.007 "num_base_bdevs_discovered": 2, 00:16:17.007 "num_base_bdevs_operational": 4, 00:16:17.007 "base_bdevs_list": [ 00:16:17.007 { 00:16:17.007 "name": "BaseBdev1", 00:16:17.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.007 "is_configured": false, 00:16:17.007 "data_offset": 0, 00:16:17.007 "data_size": 0 00:16:17.007 }, 00:16:17.007 { 00:16:17.007 "name": null, 00:16:17.007 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:17.007 "is_configured": false, 00:16:17.007 "data_offset": 0, 00:16:17.007 "data_size": 63488 00:16:17.007 }, 00:16:17.007 { 
00:16:17.007 "name": "BaseBdev3", 00:16:17.007 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:17.007 "is_configured": true, 00:16:17.007 "data_offset": 2048, 00:16:17.007 "data_size": 63488 00:16:17.007 }, 00:16:17.007 { 00:16:17.007 "name": "BaseBdev4", 00:16:17.007 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:17.007 "is_configured": true, 00:16:17.007 "data_offset": 2048, 00:16:17.007 "data_size": 63488 00:16:17.007 } 00:16:17.007 ] 00:16:17.007 }' 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.007 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.268 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.268 12:42:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:17.268 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.268 12:42:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.527 [2024-12-14 12:42:17.078095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.527 BaseBdev1 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.527 [ 00:16:17.527 { 00:16:17.527 "name": "BaseBdev1", 00:16:17.527 "aliases": [ 00:16:17.527 "b5268a20-59d7-44c4-9eb1-5fc3bce2df50" 00:16:17.527 ], 00:16:17.527 "product_name": "Malloc disk", 00:16:17.527 "block_size": 512, 00:16:17.527 "num_blocks": 65536, 00:16:17.527 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:17.527 "assigned_rate_limits": { 00:16:17.527 "rw_ios_per_sec": 0, 00:16:17.527 "rw_mbytes_per_sec": 0, 00:16:17.527 
"r_mbytes_per_sec": 0, 00:16:17.527 "w_mbytes_per_sec": 0 00:16:17.527 }, 00:16:17.527 "claimed": true, 00:16:17.527 "claim_type": "exclusive_write", 00:16:17.527 "zoned": false, 00:16:17.527 "supported_io_types": { 00:16:17.527 "read": true, 00:16:17.527 "write": true, 00:16:17.527 "unmap": true, 00:16:17.527 "flush": true, 00:16:17.527 "reset": true, 00:16:17.527 "nvme_admin": false, 00:16:17.527 "nvme_io": false, 00:16:17.527 "nvme_io_md": false, 00:16:17.527 "write_zeroes": true, 00:16:17.527 "zcopy": true, 00:16:17.527 "get_zone_info": false, 00:16:17.527 "zone_management": false, 00:16:17.527 "zone_append": false, 00:16:17.527 "compare": false, 00:16:17.527 "compare_and_write": false, 00:16:17.527 "abort": true, 00:16:17.527 "seek_hole": false, 00:16:17.527 "seek_data": false, 00:16:17.527 "copy": true, 00:16:17.527 "nvme_iov_md": false 00:16:17.527 }, 00:16:17.527 "memory_domains": [ 00:16:17.527 { 00:16:17.527 "dma_device_id": "system", 00:16:17.527 "dma_device_type": 1 00:16:17.527 }, 00:16:17.527 { 00:16:17.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.527 "dma_device_type": 2 00:16:17.527 } 00:16:17.527 ], 00:16:17.527 "driver_specific": {} 00:16:17.527 } 00:16:17.527 ] 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.527 12:42:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.527 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.527 "name": "Existed_Raid", 00:16:17.527 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:17.527 "strip_size_kb": 64, 00:16:17.527 "state": "configuring", 00:16:17.527 "raid_level": "raid5f", 00:16:17.527 "superblock": true, 00:16:17.528 "num_base_bdevs": 4, 00:16:17.528 "num_base_bdevs_discovered": 3, 00:16:17.528 "num_base_bdevs_operational": 4, 00:16:17.528 "base_bdevs_list": [ 00:16:17.528 { 00:16:17.528 "name": "BaseBdev1", 00:16:17.528 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:17.528 "is_configured": true, 00:16:17.528 "data_offset": 2048, 00:16:17.528 "data_size": 63488 00:16:17.528 
}, 00:16:17.528 { 00:16:17.528 "name": null, 00:16:17.528 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:17.528 "is_configured": false, 00:16:17.528 "data_offset": 0, 00:16:17.528 "data_size": 63488 00:16:17.528 }, 00:16:17.528 { 00:16:17.528 "name": "BaseBdev3", 00:16:17.528 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:17.528 "is_configured": true, 00:16:17.528 "data_offset": 2048, 00:16:17.528 "data_size": 63488 00:16:17.528 }, 00:16:17.528 { 00:16:17.528 "name": "BaseBdev4", 00:16:17.528 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:17.528 "is_configured": true, 00:16:17.528 "data_offset": 2048, 00:16:17.528 "data_size": 63488 00:16:17.528 } 00:16:17.528 ] 00:16:17.528 }' 00:16:17.528 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.528 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 
[2024-12-14 12:42:17.589287] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.098 "name": "Existed_Raid", 00:16:18.098 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:18.098 "strip_size_kb": 64, 00:16:18.098 "state": "configuring", 00:16:18.098 "raid_level": "raid5f", 00:16:18.098 "superblock": true, 00:16:18.098 "num_base_bdevs": 4, 00:16:18.098 "num_base_bdevs_discovered": 2, 00:16:18.098 "num_base_bdevs_operational": 4, 00:16:18.098 "base_bdevs_list": [ 00:16:18.098 { 00:16:18.098 "name": "BaseBdev1", 00:16:18.098 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:18.098 "is_configured": true, 00:16:18.098 "data_offset": 2048, 00:16:18.098 "data_size": 63488 00:16:18.098 }, 00:16:18.098 { 00:16:18.098 "name": null, 00:16:18.098 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:18.098 "is_configured": false, 00:16:18.098 "data_offset": 0, 00:16:18.098 "data_size": 63488 00:16:18.098 }, 00:16:18.098 { 00:16:18.098 "name": null, 00:16:18.098 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:18.098 "is_configured": false, 00:16:18.098 "data_offset": 0, 00:16:18.098 "data_size": 63488 00:16:18.098 }, 00:16:18.098 { 00:16:18.098 "name": "BaseBdev4", 00:16:18.098 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:18.098 "is_configured": true, 00:16:18.098 "data_offset": 2048, 00:16:18.098 "data_size": 63488 00:16:18.098 } 00:16:18.098 ] 00:16:18.098 }' 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.098 12:42:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.358 [2024-12-14 12:42:18.056444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.358 12:42:18 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.358 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.617 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.617 "name": "Existed_Raid", 00:16:18.617 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:18.617 "strip_size_kb": 64, 00:16:18.617 "state": "configuring", 00:16:18.617 "raid_level": "raid5f", 00:16:18.617 "superblock": true, 00:16:18.617 "num_base_bdevs": 4, 00:16:18.617 "num_base_bdevs_discovered": 3, 00:16:18.617 "num_base_bdevs_operational": 4, 00:16:18.617 "base_bdevs_list": [ 00:16:18.617 { 00:16:18.617 "name": "BaseBdev1", 00:16:18.617 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 2048, 00:16:18.617 "data_size": 63488 00:16:18.617 }, 00:16:18.617 { 00:16:18.617 "name": null, 00:16:18.617 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:18.617 "is_configured": false, 00:16:18.617 "data_offset": 0, 00:16:18.617 "data_size": 63488 00:16:18.617 }, 00:16:18.617 { 00:16:18.617 "name": "BaseBdev3", 00:16:18.617 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 2048, 00:16:18.617 "data_size": 63488 00:16:18.617 }, 00:16:18.617 { 
00:16:18.617 "name": "BaseBdev4", 00:16:18.617 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:18.617 "is_configured": true, 00:16:18.617 "data_offset": 2048, 00:16:18.617 "data_size": 63488 00:16:18.617 } 00:16:18.617 ] 00:16:18.617 }' 00:16:18.617 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.617 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.877 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.877 [2024-12-14 12:42:18.531665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.136 "name": "Existed_Raid", 00:16:19.136 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:19.136 "strip_size_kb": 64, 00:16:19.136 "state": "configuring", 00:16:19.136 "raid_level": "raid5f", 00:16:19.136 "superblock": true, 00:16:19.136 "num_base_bdevs": 4, 00:16:19.136 "num_base_bdevs_discovered": 2, 00:16:19.136 
"num_base_bdevs_operational": 4, 00:16:19.136 "base_bdevs_list": [ 00:16:19.136 { 00:16:19.136 "name": null, 00:16:19.136 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:19.136 "is_configured": false, 00:16:19.136 "data_offset": 0, 00:16:19.136 "data_size": 63488 00:16:19.136 }, 00:16:19.136 { 00:16:19.136 "name": null, 00:16:19.136 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:19.136 "is_configured": false, 00:16:19.136 "data_offset": 0, 00:16:19.136 "data_size": 63488 00:16:19.136 }, 00:16:19.136 { 00:16:19.136 "name": "BaseBdev3", 00:16:19.136 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:19.136 "is_configured": true, 00:16:19.136 "data_offset": 2048, 00:16:19.136 "data_size": 63488 00:16:19.136 }, 00:16:19.136 { 00:16:19.136 "name": "BaseBdev4", 00:16:19.136 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:19.136 "is_configured": true, 00:16:19.136 "data_offset": 2048, 00:16:19.136 "data_size": 63488 00:16:19.136 } 00:16:19.136 ] 00:16:19.136 }' 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.136 12:42:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.395 [2024-12-14 12:42:19.110141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.395 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.655 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.655 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.655 "name": "Existed_Raid", 00:16:19.655 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:19.655 "strip_size_kb": 64, 00:16:19.655 "state": "configuring", 00:16:19.655 "raid_level": "raid5f", 00:16:19.655 "superblock": true, 00:16:19.655 "num_base_bdevs": 4, 00:16:19.655 "num_base_bdevs_discovered": 3, 00:16:19.655 "num_base_bdevs_operational": 4, 00:16:19.655 "base_bdevs_list": [ 00:16:19.655 { 00:16:19.655 "name": null, 00:16:19.655 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:19.655 "is_configured": false, 00:16:19.655 "data_offset": 0, 00:16:19.655 "data_size": 63488 00:16:19.655 }, 00:16:19.655 { 00:16:19.655 "name": "BaseBdev2", 00:16:19.655 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:19.655 "is_configured": true, 00:16:19.655 "data_offset": 2048, 00:16:19.655 "data_size": 63488 00:16:19.655 }, 00:16:19.655 { 00:16:19.655 "name": "BaseBdev3", 00:16:19.655 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:19.655 "is_configured": true, 00:16:19.655 "data_offset": 2048, 00:16:19.655 "data_size": 63488 00:16:19.655 }, 00:16:19.655 { 00:16:19.655 "name": "BaseBdev4", 00:16:19.655 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:19.655 "is_configured": true, 00:16:19.655 "data_offset": 2048, 00:16:19.655 "data_size": 63488 00:16:19.655 } 00:16:19.655 ] 00:16:19.655 }' 00:16:19.655 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.655 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b5268a20-59d7-44c4-9eb1-5fc3bce2df50 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.915 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.176 [2024-12-14 12:42:19.653737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:20.176 [2024-12-14 12:42:19.653985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:20.176 [2024-12-14 
12:42:19.653997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:20.176 [2024-12-14 12:42:19.654266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:20.176 NewBaseBdev 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.176 [2024-12-14 12:42:19.661786] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:20.176 [2024-12-14 12:42:19.661814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:20.176 [2024-12-14 12:42:19.662083] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.176 [ 00:16:20.176 { 00:16:20.176 "name": "NewBaseBdev", 00:16:20.176 "aliases": [ 00:16:20.176 "b5268a20-59d7-44c4-9eb1-5fc3bce2df50" 00:16:20.176 ], 00:16:20.176 "product_name": "Malloc disk", 00:16:20.176 "block_size": 512, 00:16:20.176 "num_blocks": 65536, 00:16:20.176 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:20.176 "assigned_rate_limits": { 00:16:20.176 "rw_ios_per_sec": 0, 00:16:20.176 "rw_mbytes_per_sec": 0, 00:16:20.176 "r_mbytes_per_sec": 0, 00:16:20.176 "w_mbytes_per_sec": 0 00:16:20.176 }, 00:16:20.176 "claimed": true, 00:16:20.176 "claim_type": "exclusive_write", 00:16:20.176 "zoned": false, 00:16:20.176 "supported_io_types": { 00:16:20.176 "read": true, 00:16:20.176 "write": true, 00:16:20.176 "unmap": true, 00:16:20.176 "flush": true, 00:16:20.176 "reset": true, 00:16:20.176 "nvme_admin": false, 00:16:20.176 "nvme_io": false, 00:16:20.176 "nvme_io_md": false, 00:16:20.176 "write_zeroes": true, 00:16:20.176 "zcopy": true, 00:16:20.176 "get_zone_info": false, 00:16:20.176 "zone_management": false, 00:16:20.176 "zone_append": false, 00:16:20.176 "compare": false, 00:16:20.176 "compare_and_write": false, 00:16:20.176 "abort": true, 00:16:20.176 "seek_hole": false, 00:16:20.176 "seek_data": false, 00:16:20.176 "copy": true, 00:16:20.176 "nvme_iov_md": false 00:16:20.176 }, 00:16:20.176 "memory_domains": [ 00:16:20.176 { 00:16:20.176 "dma_device_id": "system", 00:16:20.176 "dma_device_type": 1 00:16:20.176 }, 00:16:20.176 { 00:16:20.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:20.176 "dma_device_type": 2 00:16:20.176 } 00:16:20.176 ], 00:16:20.176 "driver_specific": {} 00:16:20.176 } 00:16:20.176 ] 00:16:20.176 12:42:19 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:20.176 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.176 "name": "Existed_Raid", 00:16:20.176 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:20.176 "strip_size_kb": 64, 00:16:20.176 "state": "online", 00:16:20.176 "raid_level": "raid5f", 00:16:20.176 "superblock": true, 00:16:20.176 "num_base_bdevs": 4, 00:16:20.176 "num_base_bdevs_discovered": 4, 00:16:20.176 "num_base_bdevs_operational": 4, 00:16:20.176 "base_bdevs_list": [ 00:16:20.176 { 00:16:20.176 "name": "NewBaseBdev", 00:16:20.176 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:20.176 "is_configured": true, 00:16:20.176 "data_offset": 2048, 00:16:20.176 "data_size": 63488 00:16:20.176 }, 00:16:20.176 { 00:16:20.176 "name": "BaseBdev2", 00:16:20.176 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:20.176 "is_configured": true, 00:16:20.176 "data_offset": 2048, 00:16:20.176 "data_size": 63488 00:16:20.176 }, 00:16:20.176 { 00:16:20.177 "name": "BaseBdev3", 00:16:20.177 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:20.177 "is_configured": true, 00:16:20.177 "data_offset": 2048, 00:16:20.177 "data_size": 63488 00:16:20.177 }, 00:16:20.177 { 00:16:20.177 "name": "BaseBdev4", 00:16:20.177 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:20.177 "is_configured": true, 00:16:20.177 "data_offset": 2048, 00:16:20.177 "data_size": 63488 00:16:20.177 } 00:16:20.177 ] 00:16:20.177 }' 00:16:20.177 12:42:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.177 12:42:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.437 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:20.437 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:20.437 12:42:20 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:20.437 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:20.437 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:20.437 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.698 [2024-12-14 12:42:20.181747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:20.698 "name": "Existed_Raid", 00:16:20.698 "aliases": [ 00:16:20.698 "2fab89b6-6ff7-4c91-a610-7e18580b7684" 00:16:20.698 ], 00:16:20.698 "product_name": "Raid Volume", 00:16:20.698 "block_size": 512, 00:16:20.698 "num_blocks": 190464, 00:16:20.698 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:20.698 "assigned_rate_limits": { 00:16:20.698 "rw_ios_per_sec": 0, 00:16:20.698 "rw_mbytes_per_sec": 0, 00:16:20.698 "r_mbytes_per_sec": 0, 00:16:20.698 "w_mbytes_per_sec": 0 00:16:20.698 }, 00:16:20.698 "claimed": false, 00:16:20.698 "zoned": false, 00:16:20.698 "supported_io_types": { 00:16:20.698 "read": true, 00:16:20.698 "write": true, 00:16:20.698 "unmap": false, 00:16:20.698 "flush": false, 00:16:20.698 "reset": true, 00:16:20.698 "nvme_admin": false, 00:16:20.698 "nvme_io": false, 
00:16:20.698 "nvme_io_md": false, 00:16:20.698 "write_zeroes": true, 00:16:20.698 "zcopy": false, 00:16:20.698 "get_zone_info": false, 00:16:20.698 "zone_management": false, 00:16:20.698 "zone_append": false, 00:16:20.698 "compare": false, 00:16:20.698 "compare_and_write": false, 00:16:20.698 "abort": false, 00:16:20.698 "seek_hole": false, 00:16:20.698 "seek_data": false, 00:16:20.698 "copy": false, 00:16:20.698 "nvme_iov_md": false 00:16:20.698 }, 00:16:20.698 "driver_specific": { 00:16:20.698 "raid": { 00:16:20.698 "uuid": "2fab89b6-6ff7-4c91-a610-7e18580b7684", 00:16:20.698 "strip_size_kb": 64, 00:16:20.698 "state": "online", 00:16:20.698 "raid_level": "raid5f", 00:16:20.698 "superblock": true, 00:16:20.698 "num_base_bdevs": 4, 00:16:20.698 "num_base_bdevs_discovered": 4, 00:16:20.698 "num_base_bdevs_operational": 4, 00:16:20.698 "base_bdevs_list": [ 00:16:20.698 { 00:16:20.698 "name": "NewBaseBdev", 00:16:20.698 "uuid": "b5268a20-59d7-44c4-9eb1-5fc3bce2df50", 00:16:20.698 "is_configured": true, 00:16:20.698 "data_offset": 2048, 00:16:20.698 "data_size": 63488 00:16:20.698 }, 00:16:20.698 { 00:16:20.698 "name": "BaseBdev2", 00:16:20.698 "uuid": "51fb8e3d-b71b-4a5e-a6d5-14acaff5d7a5", 00:16:20.698 "is_configured": true, 00:16:20.698 "data_offset": 2048, 00:16:20.698 "data_size": 63488 00:16:20.698 }, 00:16:20.698 { 00:16:20.698 "name": "BaseBdev3", 00:16:20.698 "uuid": "f510faa1-31f8-4ae4-855c-e71454449349", 00:16:20.698 "is_configured": true, 00:16:20.698 "data_offset": 2048, 00:16:20.698 "data_size": 63488 00:16:20.698 }, 00:16:20.698 { 00:16:20.698 "name": "BaseBdev4", 00:16:20.698 "uuid": "5282e67b-fb03-406f-a226-809d95f18e07", 00:16:20.698 "is_configured": true, 00:16:20.698 "data_offset": 2048, 00:16:20.698 "data_size": 63488 00:16:20.698 } 00:16:20.698 ] 00:16:20.698 } 00:16:20.698 } 00:16:20.698 }' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:20.698 BaseBdev2 00:16:20.698 BaseBdev3 00:16:20.698 BaseBdev4' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.698 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.959 12:42:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.959 [2024-12-14 12:42:20.532865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:20.959 [2024-12-14 12:42:20.532898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:20.959 [2024-12-14 12:42:20.532975] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:20.959 [2024-12-14 12:42:20.533297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:20.959 [2024-12-14 12:42:20.533316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 85177 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85177 ']' 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 85177 00:16:20.959 12:42:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85177 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.959 killing process with pid 85177 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85177' 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 85177 00:16:20.959 [2024-12-14 12:42:20.572877] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:20.959 12:42:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 85177 00:16:21.529 [2024-12-14 12:42:20.957448] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:22.468 12:42:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:22.468 00:16:22.468 real 0m11.430s 00:16:22.468 user 0m18.257s 00:16:22.468 sys 0m2.101s 00:16:22.468 12:42:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:22.468 12:42:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.468 ************************************ 00:16:22.468 END TEST raid5f_state_function_test_sb 00:16:22.468 ************************************ 00:16:22.468 12:42:22 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:16:22.468 12:42:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:22.468 
12:42:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:22.468 12:42:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:22.468 ************************************ 00:16:22.468 START TEST raid5f_superblock_test 00:16:22.468 ************************************ 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=85851 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 85851 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 85851 ']' 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.468 12:42:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.468 [2024-12-14 12:42:22.198496] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:22.469 [2024-12-14 12:42:22.198636] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85851 ] 00:16:22.729 [2024-12-14 12:42:22.357669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.729 [2024-12-14 12:42:22.464822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.988 [2024-12-14 12:42:22.654145] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:22.988 [2024-12-14 12:42:22.654207] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 malloc1 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 [2024-12-14 12:42:23.073937] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:23.558 [2024-12-14 12:42:23.073993] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.558 [2024-12-14 12:42:23.074031] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:23.558 [2024-12-14 12:42:23.074039] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.558 [2024-12-14 12:42:23.076258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.558 [2024-12-14 12:42:23.076292] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:23.558 pt1 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 malloc2 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 [2024-12-14 12:42:23.127757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.558 [2024-12-14 12:42:23.127861] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.558 [2024-12-14 12:42:23.127919] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:23.558 [2024-12-14 12:42:23.127975] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.558 [2024-12-14 12:42:23.129961] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.558 [2024-12-14 12:42:23.130026] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.558 pt2 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 malloc3 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 [2024-12-14 12:42:23.200516] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:23.558 [2024-12-14 12:42:23.200615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.558 [2024-12-14 12:42:23.200652] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:23.558 [2024-12-14 12:42:23.200679] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.558 [2024-12-14 12:42:23.202699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.558 [2024-12-14 12:42:23.202767] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:23.558 pt3 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.558 12:42:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 malloc4 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.558 [2024-12-14 12:42:23.257961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:23.558 [2024-12-14 12:42:23.258021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.558 [2024-12-14 12:42:23.258054] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:23.558 [2024-12-14 12:42:23.258063] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.558 [2024-12-14 12:42:23.260141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.558 [2024-12-14 12:42:23.260211] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:23.558 pt4 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.558 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:23.559 [2024-12-14 12:42:23.269965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:23.559 [2024-12-14 12:42:23.271734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.559 [2024-12-14 12:42:23.271874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:23.559 [2024-12-14 12:42:23.271932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:23.559 [2024-12-14 12:42:23.272128] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:23.559 [2024-12-14 12:42:23.272145] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:23.559 [2024-12-14 12:42:23.272381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:23.559 [2024-12-14 12:42:23.279690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:23.559 [2024-12-14 12:42:23.279759] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:23.559 [2024-12-14 12:42:23.279983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:23.559 
12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.559 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.819 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.819 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:23.819 "name": "raid_bdev1", 00:16:23.819 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:23.819 "strip_size_kb": 64, 00:16:23.819 "state": "online", 00:16:23.819 "raid_level": "raid5f", 00:16:23.819 "superblock": true, 00:16:23.819 "num_base_bdevs": 4, 00:16:23.820 "num_base_bdevs_discovered": 4, 00:16:23.820 "num_base_bdevs_operational": 4, 00:16:23.820 "base_bdevs_list": [ 00:16:23.820 { 00:16:23.820 "name": "pt1", 00:16:23.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:23.820 "is_configured": true, 00:16:23.820 "data_offset": 2048, 00:16:23.820 "data_size": 63488 00:16:23.820 }, 00:16:23.820 { 00:16:23.820 "name": "pt2", 00:16:23.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:23.820 "is_configured": true, 00:16:23.820 "data_offset": 2048, 00:16:23.820 
"data_size": 63488 00:16:23.820 }, 00:16:23.820 { 00:16:23.820 "name": "pt3", 00:16:23.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:23.820 "is_configured": true, 00:16:23.820 "data_offset": 2048, 00:16:23.820 "data_size": 63488 00:16:23.820 }, 00:16:23.820 { 00:16:23.820 "name": "pt4", 00:16:23.820 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:23.820 "is_configured": true, 00:16:23.820 "data_offset": 2048, 00:16:23.820 "data_size": 63488 00:16:23.820 } 00:16:23.820 ] 00:16:23.820 }' 00:16:23.820 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:23.820 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.080 [2024-12-14 12:42:23.755789] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:24.080 "name": "raid_bdev1", 00:16:24.080 "aliases": [ 00:16:24.080 "e124f456-12e3-4f9f-a09e-e2186d02e87a" 00:16:24.080 ], 00:16:24.080 "product_name": "Raid Volume", 00:16:24.080 "block_size": 512, 00:16:24.080 "num_blocks": 190464, 00:16:24.080 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:24.080 "assigned_rate_limits": { 00:16:24.080 "rw_ios_per_sec": 0, 00:16:24.080 "rw_mbytes_per_sec": 0, 00:16:24.080 "r_mbytes_per_sec": 0, 00:16:24.080 "w_mbytes_per_sec": 0 00:16:24.080 }, 00:16:24.080 "claimed": false, 00:16:24.080 "zoned": false, 00:16:24.080 "supported_io_types": { 00:16:24.080 "read": true, 00:16:24.080 "write": true, 00:16:24.080 "unmap": false, 00:16:24.080 "flush": false, 00:16:24.080 "reset": true, 00:16:24.080 "nvme_admin": false, 00:16:24.080 "nvme_io": false, 00:16:24.080 "nvme_io_md": false, 00:16:24.080 "write_zeroes": true, 00:16:24.080 "zcopy": false, 00:16:24.080 "get_zone_info": false, 00:16:24.080 "zone_management": false, 00:16:24.080 "zone_append": false, 00:16:24.080 "compare": false, 00:16:24.080 "compare_and_write": false, 00:16:24.080 "abort": false, 00:16:24.080 "seek_hole": false, 00:16:24.080 "seek_data": false, 00:16:24.080 "copy": false, 00:16:24.080 "nvme_iov_md": false 00:16:24.080 }, 00:16:24.080 "driver_specific": { 00:16:24.080 "raid": { 00:16:24.080 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:24.080 "strip_size_kb": 64, 00:16:24.080 "state": "online", 00:16:24.080 "raid_level": "raid5f", 00:16:24.080 "superblock": true, 00:16:24.080 "num_base_bdevs": 4, 00:16:24.080 "num_base_bdevs_discovered": 4, 00:16:24.080 "num_base_bdevs_operational": 4, 00:16:24.080 "base_bdevs_list": [ 00:16:24.080 { 00:16:24.080 "name": "pt1", 00:16:24.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.080 "is_configured": true, 00:16:24.080 "data_offset": 2048, 
00:16:24.080 "data_size": 63488 00:16:24.080 }, 00:16:24.080 { 00:16:24.080 "name": "pt2", 00:16:24.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.080 "is_configured": true, 00:16:24.080 "data_offset": 2048, 00:16:24.080 "data_size": 63488 00:16:24.080 }, 00:16:24.080 { 00:16:24.080 "name": "pt3", 00:16:24.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.080 "is_configured": true, 00:16:24.080 "data_offset": 2048, 00:16:24.080 "data_size": 63488 00:16:24.080 }, 00:16:24.080 { 00:16:24.080 "name": "pt4", 00:16:24.080 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.080 "is_configured": true, 00:16:24.080 "data_offset": 2048, 00:16:24.080 "data_size": 63488 00:16:24.080 } 00:16:24.080 ] 00:16:24.080 } 00:16:24.080 } 00:16:24.080 }' 00:16:24.080 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.340 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:24.340 pt2 00:16:24.340 pt3 00:16:24.340 pt4' 00:16:24.340 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.340 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:24.340 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.340 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.341 12:42:23 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.341 12:42:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:24.341 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.600 [2024-12-14 12:42:24.079261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e124f456-12e3-4f9f-a09e-e2186d02e87a 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
e124f456-12e3-4f9f-a09e-e2186d02e87a ']' 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.600 [2024-12-14 12:42:24.122929] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.600 [2024-12-14 12:42:24.123002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.600 [2024-12-14 12:42:24.123145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.600 [2024-12-14 12:42:24.123276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:24.600 [2024-12-14 12:42:24.123323] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.600 
12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.600 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.601 12:42:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.601 [2024-12-14 12:42:24.282664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:24.601 [2024-12-14 12:42:24.284408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:24.601 [2024-12-14 12:42:24.284457] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:24.601 [2024-12-14 12:42:24.284489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:24.601 [2024-12-14 12:42:24.284539] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:24.601 [2024-12-14 12:42:24.284585] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:24.601 [2024-12-14 12:42:24.284604] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:24.601 [2024-12-14 12:42:24.284622] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:24.601 [2024-12-14 12:42:24.284635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:24.601 [2024-12-14 12:42:24.284646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:24.601 request: 00:16:24.601 { 00:16:24.601 "name": "raid_bdev1", 00:16:24.601 "raid_level": "raid5f", 00:16:24.601 "base_bdevs": [ 00:16:24.601 "malloc1", 00:16:24.601 "malloc2", 00:16:24.601 "malloc3", 00:16:24.601 "malloc4" 00:16:24.601 ], 00:16:24.601 "strip_size_kb": 64, 00:16:24.601 "superblock": false, 00:16:24.601 "method": "bdev_raid_create", 00:16:24.601 "req_id": 1 00:16:24.601 } 00:16:24.601 Got JSON-RPC error response 
00:16:24.601 response: 00:16:24.601 { 00:16:24.601 "code": -17, 00:16:24.601 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:24.601 } 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:24.601 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.861 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:24.861 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:24.861 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:24.861 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.861 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.861 [2024-12-14 12:42:24.350519] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:24.861 [2024-12-14 12:42:24.350621] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:24.861 [2024-12-14 12:42:24.350654] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:24.861 [2024-12-14 12:42:24.350683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.861 [2024-12-14 12:42:24.352755] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.861 [2024-12-14 12:42:24.352826] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:24.861 [2024-12-14 12:42:24.352919] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:24.862 [2024-12-14 12:42:24.353007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:24.862 pt1 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.862 "name": "raid_bdev1", 00:16:24.862 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:24.862 "strip_size_kb": 64, 00:16:24.862 "state": "configuring", 00:16:24.862 "raid_level": "raid5f", 00:16:24.862 "superblock": true, 00:16:24.862 "num_base_bdevs": 4, 00:16:24.862 "num_base_bdevs_discovered": 1, 00:16:24.862 "num_base_bdevs_operational": 4, 00:16:24.862 "base_bdevs_list": [ 00:16:24.862 { 00:16:24.862 "name": "pt1", 00:16:24.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.862 "is_configured": true, 00:16:24.862 "data_offset": 2048, 00:16:24.862 "data_size": 63488 00:16:24.862 }, 00:16:24.862 { 00:16:24.862 "name": null, 00:16:24.862 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.862 "is_configured": false, 00:16:24.862 "data_offset": 2048, 00:16:24.862 "data_size": 63488 00:16:24.862 }, 00:16:24.862 { 00:16:24.862 "name": null, 00:16:24.862 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.862 "is_configured": false, 00:16:24.862 "data_offset": 2048, 00:16:24.862 "data_size": 63488 00:16:24.862 }, 00:16:24.862 { 00:16:24.862 "name": null, 00:16:24.862 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.862 "is_configured": false, 00:16:24.862 "data_offset": 2048, 00:16:24.862 "data_size": 63488 00:16:24.862 } 00:16:24.862 ] 00:16:24.862 }' 
00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.862 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.123 [2024-12-14 12:42:24.829751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:25.123 [2024-12-14 12:42:24.829833] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.123 [2024-12-14 12:42:24.829853] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:25.123 [2024-12-14 12:42:24.829863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.123 [2024-12-14 12:42:24.830321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.123 [2024-12-14 12:42:24.830354] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:25.123 [2024-12-14 12:42:24.830446] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:25.123 [2024-12-14 12:42:24.830470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:25.123 pt2 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.123 [2024-12-14 12:42:24.841723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.123 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.393 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:25.393 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.393 "name": "raid_bdev1", 00:16:25.393 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:25.393 "strip_size_kb": 64, 00:16:25.393 "state": "configuring", 00:16:25.393 "raid_level": "raid5f", 00:16:25.393 "superblock": true, 00:16:25.393 "num_base_bdevs": 4, 00:16:25.393 "num_base_bdevs_discovered": 1, 00:16:25.393 "num_base_bdevs_operational": 4, 00:16:25.393 "base_bdevs_list": [ 00:16:25.393 { 00:16:25.393 "name": "pt1", 00:16:25.393 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.393 "is_configured": true, 00:16:25.393 "data_offset": 2048, 00:16:25.393 "data_size": 63488 00:16:25.393 }, 00:16:25.393 { 00:16:25.393 "name": null, 00:16:25.393 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.393 "is_configured": false, 00:16:25.393 "data_offset": 0, 00:16:25.393 "data_size": 63488 00:16:25.393 }, 00:16:25.393 { 00:16:25.393 "name": null, 00:16:25.393 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.393 "is_configured": false, 00:16:25.393 "data_offset": 2048, 00:16:25.393 "data_size": 63488 00:16:25.393 }, 00:16:25.393 { 00:16:25.393 "name": null, 00:16:25.393 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.393 "is_configured": false, 00:16:25.393 "data_offset": 2048, 00:16:25.393 "data_size": 63488 00:16:25.393 } 00:16:25.393 ] 00:16:25.393 }' 00:16:25.393 12:42:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.393 12:42:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.671 [2024-12-14 12:42:25.312923] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:25.671 [2024-12-14 12:42:25.312995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.671 [2024-12-14 12:42:25.313015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:25.671 [2024-12-14 12:42:25.313024] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.671 [2024-12-14 12:42:25.313540] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.671 [2024-12-14 12:42:25.313566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:25.671 [2024-12-14 12:42:25.313660] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:25.671 [2024-12-14 12:42:25.313682] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:25.671 pt2 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.671 [2024-12-14 12:42:25.324888] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:25.671 [2024-12-14 12:42:25.324981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.671 [2024-12-14 12:42:25.325019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:25.671 [2024-12-14 12:42:25.325028] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.671 [2024-12-14 12:42:25.325420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.671 [2024-12-14 12:42:25.325438] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:25.671 [2024-12-14 12:42:25.325505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:25.671 [2024-12-14 12:42:25.325529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:25.671 pt3 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.671 [2024-12-14 12:42:25.336835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:25.671 [2024-12-14 12:42:25.336882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.671 [2024-12-14 12:42:25.336917] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:25.671 [2024-12-14 12:42:25.336924] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.671 [2024-12-14 12:42:25.337312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.671 [2024-12-14 12:42:25.337328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:25.671 [2024-12-14 12:42:25.337396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:25.671 [2024-12-14 12:42:25.337417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:25.671 [2024-12-14 12:42:25.337584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:25.671 [2024-12-14 12:42:25.337598] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:25.671 [2024-12-14 12:42:25.337838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:25.671 [2024-12-14 12:42:25.345027] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:25.671 [2024-12-14 12:42:25.345063] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:25.671 [2024-12-14 12:42:25.345254] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.671 pt4 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.671 "name": "raid_bdev1", 00:16:25.671 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:25.671 "strip_size_kb": 64, 00:16:25.671 "state": "online", 00:16:25.671 "raid_level": "raid5f", 00:16:25.671 "superblock": true, 00:16:25.671 "num_base_bdevs": 4, 00:16:25.671 "num_base_bdevs_discovered": 4, 00:16:25.671 "num_base_bdevs_operational": 4, 00:16:25.671 "base_bdevs_list": [ 00:16:25.671 { 00:16:25.671 "name": "pt1", 00:16:25.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.671 "is_configured": true, 00:16:25.671 
"data_offset": 2048, 00:16:25.671 "data_size": 63488 00:16:25.671 }, 00:16:25.671 { 00:16:25.671 "name": "pt2", 00:16:25.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.671 "is_configured": true, 00:16:25.671 "data_offset": 2048, 00:16:25.671 "data_size": 63488 00:16:25.671 }, 00:16:25.671 { 00:16:25.671 "name": "pt3", 00:16:25.671 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.671 "is_configured": true, 00:16:25.671 "data_offset": 2048, 00:16:25.671 "data_size": 63488 00:16:25.671 }, 00:16:25.671 { 00:16:25.671 "name": "pt4", 00:16:25.671 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.671 "is_configured": true, 00:16:25.671 "data_offset": 2048, 00:16:25.671 "data_size": 63488 00:16:25.671 } 00:16:25.671 ] 00:16:25.671 }' 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.671 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.238 12:42:25 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.238 [2024-12-14 12:42:25.809216] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.238 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:26.238 "name": "raid_bdev1", 00:16:26.238 "aliases": [ 00:16:26.238 "e124f456-12e3-4f9f-a09e-e2186d02e87a" 00:16:26.238 ], 00:16:26.238 "product_name": "Raid Volume", 00:16:26.238 "block_size": 512, 00:16:26.238 "num_blocks": 190464, 00:16:26.238 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:26.238 "assigned_rate_limits": { 00:16:26.238 "rw_ios_per_sec": 0, 00:16:26.238 "rw_mbytes_per_sec": 0, 00:16:26.238 "r_mbytes_per_sec": 0, 00:16:26.238 "w_mbytes_per_sec": 0 00:16:26.238 }, 00:16:26.238 "claimed": false, 00:16:26.238 "zoned": false, 00:16:26.238 "supported_io_types": { 00:16:26.238 "read": true, 00:16:26.238 "write": true, 00:16:26.238 "unmap": false, 00:16:26.238 "flush": false, 00:16:26.238 "reset": true, 00:16:26.238 "nvme_admin": false, 00:16:26.238 "nvme_io": false, 00:16:26.238 "nvme_io_md": false, 00:16:26.238 "write_zeroes": true, 00:16:26.238 "zcopy": false, 00:16:26.238 "get_zone_info": false, 00:16:26.239 "zone_management": false, 00:16:26.239 "zone_append": false, 00:16:26.239 "compare": false, 00:16:26.239 "compare_and_write": false, 00:16:26.239 "abort": false, 00:16:26.239 "seek_hole": false, 00:16:26.239 "seek_data": false, 00:16:26.239 "copy": false, 00:16:26.239 "nvme_iov_md": false 00:16:26.239 }, 00:16:26.239 "driver_specific": { 00:16:26.239 "raid": { 00:16:26.239 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:26.239 "strip_size_kb": 64, 00:16:26.239 "state": "online", 00:16:26.239 "raid_level": "raid5f", 00:16:26.239 "superblock": true, 00:16:26.239 "num_base_bdevs": 4, 00:16:26.239 "num_base_bdevs_discovered": 4, 
00:16:26.239 "num_base_bdevs_operational": 4, 00:16:26.239 "base_bdevs_list": [ 00:16:26.239 { 00:16:26.239 "name": "pt1", 00:16:26.239 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.239 "is_configured": true, 00:16:26.239 "data_offset": 2048, 00:16:26.239 "data_size": 63488 00:16:26.239 }, 00:16:26.239 { 00:16:26.239 "name": "pt2", 00:16:26.239 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.239 "is_configured": true, 00:16:26.239 "data_offset": 2048, 00:16:26.239 "data_size": 63488 00:16:26.239 }, 00:16:26.239 { 00:16:26.239 "name": "pt3", 00:16:26.239 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.239 "is_configured": true, 00:16:26.239 "data_offset": 2048, 00:16:26.239 "data_size": 63488 00:16:26.239 }, 00:16:26.239 { 00:16:26.239 "name": "pt4", 00:16:26.239 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.239 "is_configured": true, 00:16:26.239 "data_offset": 2048, 00:16:26.239 "data_size": 63488 00:16:26.239 } 00:16:26.239 ] 00:16:26.239 } 00:16:26.239 } 00:16:26.239 }' 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:26.239 pt2 00:16:26.239 pt3 00:16:26.239 pt4' 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.239 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.498 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.498 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.498 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.498 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.498 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:26.498 12:42:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.498 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.498 12:42:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 [2024-12-14 12:42:26.160514] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.499 12:42:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e124f456-12e3-4f9f-a09e-e2186d02e87a '!=' e124f456-12e3-4f9f-a09e-e2186d02e87a ']' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 [2024-12-14 12:42:26.188335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.499 "name": "raid_bdev1", 00:16:26.499 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:26.499 "strip_size_kb": 64, 00:16:26.499 "state": "online", 00:16:26.499 "raid_level": "raid5f", 00:16:26.499 "superblock": true, 00:16:26.499 "num_base_bdevs": 4, 00:16:26.499 "num_base_bdevs_discovered": 3, 00:16:26.499 "num_base_bdevs_operational": 3, 00:16:26.499 "base_bdevs_list": [ 00:16:26.499 { 00:16:26.499 "name": null, 00:16:26.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.499 "is_configured": false, 00:16:26.499 "data_offset": 0, 00:16:26.499 "data_size": 63488 00:16:26.499 }, 00:16:26.499 { 00:16:26.499 "name": "pt2", 00:16:26.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.499 "is_configured": true, 00:16:26.499 "data_offset": 2048, 00:16:26.499 "data_size": 63488 00:16:26.499 }, 00:16:26.499 { 00:16:26.499 "name": "pt3", 00:16:26.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.499 "is_configured": true, 00:16:26.499 "data_offset": 2048, 00:16:26.499 "data_size": 63488 00:16:26.499 }, 00:16:26.499 { 00:16:26.499 "name": "pt4", 00:16:26.499 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.499 "is_configured": true, 00:16:26.499 
"data_offset": 2048, 00:16:26.499 "data_size": 63488 00:16:26.499 } 00:16:26.499 ] 00:16:26.499 }' 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.499 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 [2024-12-14 12:42:26.643586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.070 [2024-12-14 12:42:26.643620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.070 [2024-12-14 12:42:26.643703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.070 [2024-12-14 12:42:26.643791] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.070 [2024-12-14 12:42:26.643801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 [2024-12-14 12:42:26.739364] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:27.070 [2024-12-14 12:42:26.739413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.070 [2024-12-14 12:42:26.739430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:27.070 [2024-12-14 12:42:26.739438] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.070 [2024-12-14 12:42:26.741593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.070 [2024-12-14 12:42:26.741628] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:27.070 [2024-12-14 12:42:26.741704] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:27.070 [2024-12-14 12:42:26.741744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.070 pt2 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.070 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.070 "name": "raid_bdev1", 00:16:27.070 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:27.070 "strip_size_kb": 64, 00:16:27.070 "state": "configuring", 00:16:27.070 "raid_level": "raid5f", 00:16:27.070 "superblock": true, 00:16:27.070 
"num_base_bdevs": 4, 00:16:27.070 "num_base_bdevs_discovered": 1, 00:16:27.070 "num_base_bdevs_operational": 3, 00:16:27.070 "base_bdevs_list": [ 00:16:27.070 { 00:16:27.070 "name": null, 00:16:27.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.070 "is_configured": false, 00:16:27.071 "data_offset": 2048, 00:16:27.071 "data_size": 63488 00:16:27.071 }, 00:16:27.071 { 00:16:27.071 "name": "pt2", 00:16:27.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.071 "is_configured": true, 00:16:27.071 "data_offset": 2048, 00:16:27.071 "data_size": 63488 00:16:27.071 }, 00:16:27.071 { 00:16:27.071 "name": null, 00:16:27.071 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.071 "is_configured": false, 00:16:27.071 "data_offset": 2048, 00:16:27.071 "data_size": 63488 00:16:27.071 }, 00:16:27.071 { 00:16:27.071 "name": null, 00:16:27.071 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.071 "is_configured": false, 00:16:27.071 "data_offset": 2048, 00:16:27.071 "data_size": 63488 00:16:27.071 } 00:16:27.071 ] 00:16:27.071 }' 00:16:27.071 12:42:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.071 12:42:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 [2024-12-14 12:42:27.182668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:27.640 [2024-12-14 
12:42:27.182820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.640 [2024-12-14 12:42:27.182866] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:27.640 [2024-12-14 12:42:27.182898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.640 [2024-12-14 12:42:27.183377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.640 [2024-12-14 12:42:27.183436] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:27.640 [2024-12-14 12:42:27.183556] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:27.640 [2024-12-14 12:42:27.183608] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:27.640 pt3 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.640 "name": "raid_bdev1", 00:16:27.640 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:27.640 "strip_size_kb": 64, 00:16:27.640 "state": "configuring", 00:16:27.640 "raid_level": "raid5f", 00:16:27.640 "superblock": true, 00:16:27.640 "num_base_bdevs": 4, 00:16:27.640 "num_base_bdevs_discovered": 2, 00:16:27.640 "num_base_bdevs_operational": 3, 00:16:27.640 "base_bdevs_list": [ 00:16:27.640 { 00:16:27.640 "name": null, 00:16:27.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.640 "is_configured": false, 00:16:27.640 "data_offset": 2048, 00:16:27.640 "data_size": 63488 00:16:27.640 }, 00:16:27.640 { 00:16:27.640 "name": "pt2", 00:16:27.640 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.640 "is_configured": true, 00:16:27.640 "data_offset": 2048, 00:16:27.640 "data_size": 63488 00:16:27.640 }, 00:16:27.640 { 00:16:27.640 "name": "pt3", 00:16:27.640 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.640 "is_configured": true, 00:16:27.640 "data_offset": 2048, 00:16:27.640 "data_size": 63488 00:16:27.640 }, 00:16:27.640 { 00:16:27.640 "name": null, 00:16:27.640 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.640 "is_configured": false, 00:16:27.640 "data_offset": 2048, 
00:16:27.640 "data_size": 63488 00:16:27.640 } 00:16:27.640 ] 00:16:27.640 }' 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.640 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.901 [2024-12-14 12:42:27.617960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:27.901 [2024-12-14 12:42:27.618033] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.901 [2024-12-14 12:42:27.618087] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:27.901 [2024-12-14 12:42:27.618096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.901 [2024-12-14 12:42:27.618619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.901 [2024-12-14 12:42:27.618645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:27.901 [2024-12-14 12:42:27.618739] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:27.901 [2024-12-14 12:42:27.618768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:27.901 [2024-12-14 12:42:27.618919] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:27.901 [2024-12-14 12:42:27.618928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:27.901 [2024-12-14 12:42:27.619179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:27.901 [2024-12-14 12:42:27.625872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:27.901 pt4 00:16:27.901 [2024-12-14 12:42:27.625942] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:27.901 [2024-12-14 12:42:27.626281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.901 
12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.901 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.161 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.161 12:42:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.161 "name": "raid_bdev1", 00:16:28.161 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:28.161 "strip_size_kb": 64, 00:16:28.161 "state": "online", 00:16:28.161 "raid_level": "raid5f", 00:16:28.161 "superblock": true, 00:16:28.161 "num_base_bdevs": 4, 00:16:28.161 "num_base_bdevs_discovered": 3, 00:16:28.161 "num_base_bdevs_operational": 3, 00:16:28.161 "base_bdevs_list": [ 00:16:28.161 { 00:16:28.161 "name": null, 00:16:28.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.161 "is_configured": false, 00:16:28.161 "data_offset": 2048, 00:16:28.161 "data_size": 63488 00:16:28.161 }, 00:16:28.161 { 00:16:28.161 "name": "pt2", 00:16:28.161 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.161 "is_configured": true, 00:16:28.161 "data_offset": 2048, 00:16:28.161 "data_size": 63488 00:16:28.161 }, 00:16:28.161 { 00:16:28.161 "name": "pt3", 00:16:28.161 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.161 "is_configured": true, 00:16:28.161 "data_offset": 2048, 00:16:28.161 "data_size": 63488 00:16:28.161 }, 00:16:28.161 { 00:16:28.161 "name": "pt4", 00:16:28.161 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.161 "is_configured": true, 00:16:28.161 "data_offset": 2048, 00:16:28.161 "data_size": 63488 00:16:28.161 } 00:16:28.161 ] 00:16:28.161 }' 00:16:28.161 12:42:27 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.161 12:42:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.422 [2024-12-14 12:42:28.050710] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:28.422 [2024-12-14 12:42:28.050802] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.422 [2024-12-14 12:42:28.050910] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.422 [2024-12-14 12:42:28.051028] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.422 [2024-12-14 12:42:28.051105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.422 [2024-12-14 12:42:28.126619] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:28.422 [2024-12-14 12:42:28.126695] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.422 [2024-12-14 12:42:28.126725] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:28.422 [2024-12-14 12:42:28.126736] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.422 [2024-12-14 12:42:28.129075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.422 [2024-12-14 12:42:28.129111] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:28.422 [2024-12-14 12:42:28.129204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:28.422 [2024-12-14 12:42:28.129251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:28.422 
[2024-12-14 12:42:28.129430] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:28.422 [2024-12-14 12:42:28.129451] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:28.422 [2024-12-14 12:42:28.129468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:28.422 [2024-12-14 12:42:28.129545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.422 [2024-12-14 12:42:28.129640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.422 pt1 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.422 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.682 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.682 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.682 "name": "raid_bdev1", 00:16:28.682 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:28.682 "strip_size_kb": 64, 00:16:28.682 "state": "configuring", 00:16:28.682 "raid_level": "raid5f", 00:16:28.682 "superblock": true, 00:16:28.682 "num_base_bdevs": 4, 00:16:28.682 "num_base_bdevs_discovered": 2, 00:16:28.682 "num_base_bdevs_operational": 3, 00:16:28.682 "base_bdevs_list": [ 00:16:28.682 { 00:16:28.682 "name": null, 00:16:28.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.682 "is_configured": false, 00:16:28.682 "data_offset": 2048, 00:16:28.682 "data_size": 63488 00:16:28.682 }, 00:16:28.682 { 00:16:28.682 "name": "pt2", 00:16:28.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.682 "is_configured": true, 00:16:28.682 "data_offset": 2048, 00:16:28.682 "data_size": 63488 00:16:28.682 }, 00:16:28.682 { 00:16:28.682 "name": "pt3", 00:16:28.682 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.682 "is_configured": true, 00:16:28.682 "data_offset": 2048, 00:16:28.682 "data_size": 63488 00:16:28.682 }, 00:16:28.682 { 00:16:28.682 "name": null, 00:16:28.682 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.682 "is_configured": false, 00:16:28.682 "data_offset": 2048, 00:16:28.682 "data_size": 63488 00:16:28.682 } 00:16:28.682 ] 
00:16:28.682 }' 00:16:28.682 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.682 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.943 [2024-12-14 12:42:28.561860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:28.943 [2024-12-14 12:42:28.561929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.943 [2024-12-14 12:42:28.561952] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:28.943 [2024-12-14 12:42:28.561961] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.943 [2024-12-14 12:42:28.562431] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.943 [2024-12-14 12:42:28.562450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:28.943 [2024-12-14 12:42:28.562546] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:28.943 [2024-12-14 12:42:28.562587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:28.943 [2024-12-14 12:42:28.562739] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:28.943 [2024-12-14 12:42:28.562749] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:28.943 [2024-12-14 12:42:28.563042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:28.943 [2024-12-14 12:42:28.570409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:28.943 [2024-12-14 12:42:28.570435] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:28.943 [2024-12-14 12:42:28.570729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.943 pt4 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.943 12:42:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.943 "name": "raid_bdev1", 00:16:28.943 "uuid": "e124f456-12e3-4f9f-a09e-e2186d02e87a", 00:16:28.943 "strip_size_kb": 64, 00:16:28.943 "state": "online", 00:16:28.943 "raid_level": "raid5f", 00:16:28.943 "superblock": true, 00:16:28.943 "num_base_bdevs": 4, 00:16:28.943 "num_base_bdevs_discovered": 3, 00:16:28.943 "num_base_bdevs_operational": 3, 00:16:28.943 "base_bdevs_list": [ 00:16:28.943 { 00:16:28.943 "name": null, 00:16:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.943 "is_configured": false, 00:16:28.943 "data_offset": 2048, 00:16:28.943 "data_size": 63488 00:16:28.943 }, 00:16:28.943 { 00:16:28.943 "name": "pt2", 00:16:28.943 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.943 "is_configured": true, 00:16:28.943 "data_offset": 2048, 00:16:28.943 "data_size": 63488 00:16:28.943 }, 00:16:28.943 { 00:16:28.943 "name": "pt3", 00:16:28.943 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.943 "is_configured": true, 00:16:28.943 "data_offset": 2048, 00:16:28.943 "data_size": 63488 
00:16:28.943 }, 00:16:28.943 { 00:16:28.943 "name": "pt4", 00:16:28.943 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.943 "is_configured": true, 00:16:28.943 "data_offset": 2048, 00:16:28.943 "data_size": 63488 00:16:28.943 } 00:16:28.943 ] 00:16:28.943 }' 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.943 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.513 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:29.513 12:42:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:29.513 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.513 12:42:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.513 [2024-12-14 12:42:29.047313] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e124f456-12e3-4f9f-a09e-e2186d02e87a '!=' e124f456-12e3-4f9f-a09e-e2186d02e87a ']' 00:16:29.513 12:42:29 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 85851 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 85851 ']' 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 85851 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85851 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.513 killing process with pid 85851 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85851' 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 85851 00:16:29.513 [2024-12-14 12:42:29.130951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:29.513 [2024-12-14 12:42:29.131058] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.513 [2024-12-14 12:42:29.131150] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.513 [2024-12-14 12:42:29.131168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:29.513 12:42:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 85851 00:16:30.083 [2024-12-14 12:42:29.510819] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.023 12:42:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:31.023 
00:16:31.023 real 0m8.491s 00:16:31.023 user 0m13.393s 00:16:31.023 sys 0m1.545s 00:16:31.023 12:42:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.023 12:42:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.023 ************************************ 00:16:31.023 END TEST raid5f_superblock_test 00:16:31.023 ************************************ 00:16:31.023 12:42:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:31.023 12:42:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:31.023 12:42:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:31.023 12:42:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.023 12:42:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.023 ************************************ 00:16:31.023 START TEST raid5f_rebuild_test 00:16:31.023 ************************************ 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:31.023 12:42:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=86337 00:16:31.023 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:31.024 12:42:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 86337 00:16:31.024 12:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 86337 ']' 00:16:31.024 12:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.024 12:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.024 12:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.024 12:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.024 12:42:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.284 [2024-12-14 12:42:30.768907] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:31.284 [2024-12-14 12:42:30.769124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86337 ] 00:16:31.284 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:31.284 Zero copy mechanism will not be used. 00:16:31.284 [2024-12-14 12:42:30.941882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.544 [2024-12-14 12:42:31.052306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.544 [2024-12-14 12:42:31.242733] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.544 [2024-12-14 12:42:31.242878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.114 BaseBdev1_malloc 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:16:32.114 [2024-12-14 12:42:31.638679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:32.114 [2024-12-14 12:42:31.638741] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.114 [2024-12-14 12:42:31.638763] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:32.114 [2024-12-14 12:42:31.638774] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.114 [2024-12-14 12:42:31.640824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.114 [2024-12-14 12:42:31.640867] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:32.114 BaseBdev1 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.114 BaseBdev2_malloc 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.114 [2024-12-14 12:42:31.691329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:32.114 [2024-12-14 12:42:31.691442] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.114 [2024-12-14 12:42:31.691464] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:32.114 [2024-12-14 12:42:31.691477] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.114 [2024-12-14 12:42:31.693524] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.114 [2024-12-14 12:42:31.693563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:32.114 BaseBdev2 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.114 BaseBdev3_malloc 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.114 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.115 [2024-12-14 12:42:31.753844] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:32.115 [2024-12-14 12:42:31.753955] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.115 [2024-12-14 12:42:31.753997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:32.115 
[2024-12-14 12:42:31.754009] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.115 [2024-12-14 12:42:31.756145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.115 [2024-12-14 12:42:31.756182] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:32.115 BaseBdev3 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.115 BaseBdev4_malloc 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.115 [2024-12-14 12:42:31.805941] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:32.115 [2024-12-14 12:42:31.806035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.115 [2024-12-14 12:42:31.806073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:32.115 [2024-12-14 12:42:31.806086] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.115 [2024-12-14 12:42:31.808382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:16:32.115 [2024-12-14 12:42:31.808428] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:32.115 BaseBdev4 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.115 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.375 spare_malloc 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.375 spare_delay 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.375 [2024-12-14 12:42:31.870383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.375 [2024-12-14 12:42:31.870438] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.375 [2024-12-14 12:42:31.870470] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:32.375 [2024-12-14 12:42:31.870481] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.375 [2024-12-14 12:42:31.872607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.375 [2024-12-14 12:42:31.872645] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.375 spare 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.375 [2024-12-14 12:42:31.882427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.375 [2024-12-14 12:42:31.884382] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.375 [2024-12-14 12:42:31.884444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:32.375 [2024-12-14 12:42:31.884493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.375 [2024-12-14 12:42:31.884591] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:32.375 [2024-12-14 12:42:31.884606] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:32.375 [2024-12-14 12:42:31.884862] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:32.375 [2024-12-14 12:42:31.891645] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:32.375 [2024-12-14 12:42:31.891667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:32.375 [2024-12-14 
12:42:31.891919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.375 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.375 "name": "raid_bdev1", 00:16:32.375 "uuid": 
"8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:32.375 "strip_size_kb": 64, 00:16:32.375 "state": "online", 00:16:32.375 "raid_level": "raid5f", 00:16:32.375 "superblock": false, 00:16:32.375 "num_base_bdevs": 4, 00:16:32.375 "num_base_bdevs_discovered": 4, 00:16:32.375 "num_base_bdevs_operational": 4, 00:16:32.375 "base_bdevs_list": [ 00:16:32.375 { 00:16:32.375 "name": "BaseBdev1", 00:16:32.375 "uuid": "3a9a419b-26a0-5f3c-bef3-48921bcc7ee5", 00:16:32.375 "is_configured": true, 00:16:32.375 "data_offset": 0, 00:16:32.375 "data_size": 65536 00:16:32.375 }, 00:16:32.375 { 00:16:32.375 "name": "BaseBdev2", 00:16:32.376 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:32.376 "is_configured": true, 00:16:32.376 "data_offset": 0, 00:16:32.376 "data_size": 65536 00:16:32.376 }, 00:16:32.376 { 00:16:32.376 "name": "BaseBdev3", 00:16:32.376 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:32.376 "is_configured": true, 00:16:32.376 "data_offset": 0, 00:16:32.376 "data_size": 65536 00:16:32.376 }, 00:16:32.376 { 00:16:32.376 "name": "BaseBdev4", 00:16:32.376 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:32.376 "is_configured": true, 00:16:32.376 "data_offset": 0, 00:16:32.376 "data_size": 65536 00:16:32.376 } 00:16:32.376 ] 00:16:32.376 }' 00:16:32.376 12:42:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.376 12:42:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.636 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:32.636 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:32.636 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.636 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.636 [2024-12-14 12:42:32.335525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:32.636 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:32.896 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:32.896 [2024-12-14 12:42:32.610896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:33.156 /dev/nbd0 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:33.156 1+0 records in 00:16:33.156 1+0 records out 00:16:33.156 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036074 s, 11.4 MB/s 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.156 12:42:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:33.156 12:42:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:33.416 512+0 records in 00:16:33.416 512+0 records out 00:16:33.416 100663296 bytes (101 MB, 96 MiB) copied, 0.46134 s, 218 MB/s 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:16:33.676 [2024-12-14 12:42:33.340703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.676 [2024-12-14 12:42:33.387440] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.676 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.677 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.677 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.936 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.936 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.936 "name": "raid_bdev1", 00:16:33.936 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:33.936 "strip_size_kb": 64, 00:16:33.936 "state": "online", 00:16:33.936 "raid_level": "raid5f", 00:16:33.936 "superblock": false, 00:16:33.937 "num_base_bdevs": 4, 00:16:33.937 "num_base_bdevs_discovered": 3, 00:16:33.937 "num_base_bdevs_operational": 3, 00:16:33.937 "base_bdevs_list": [ 00:16:33.937 { 00:16:33.937 "name": null, 00:16:33.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.937 "is_configured": false, 00:16:33.937 "data_offset": 0, 00:16:33.937 "data_size": 65536 00:16:33.937 }, 00:16:33.937 { 00:16:33.937 "name": "BaseBdev2", 00:16:33.937 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:33.937 "is_configured": true, 00:16:33.937 
"data_offset": 0, 00:16:33.937 "data_size": 65536 00:16:33.937 }, 00:16:33.937 { 00:16:33.937 "name": "BaseBdev3", 00:16:33.937 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:33.937 "is_configured": true, 00:16:33.937 "data_offset": 0, 00:16:33.937 "data_size": 65536 00:16:33.937 }, 00:16:33.937 { 00:16:33.937 "name": "BaseBdev4", 00:16:33.937 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:33.937 "is_configured": true, 00:16:33.937 "data_offset": 0, 00:16:33.937 "data_size": 65536 00:16:33.937 } 00:16:33.937 ] 00:16:33.937 }' 00:16:33.937 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.937 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.197 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:34.197 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.197 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.197 [2024-12-14 12:42:33.898623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:34.197 [2024-12-14 12:42:33.914024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:34.197 12:42:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.197 12:42:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:34.197 [2024-12-14 12:42:33.923651] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.578 "name": "raid_bdev1", 00:16:35.578 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:35.578 "strip_size_kb": 64, 00:16:35.578 "state": "online", 00:16:35.578 "raid_level": "raid5f", 00:16:35.578 "superblock": false, 00:16:35.578 "num_base_bdevs": 4, 00:16:35.578 "num_base_bdevs_discovered": 4, 00:16:35.578 "num_base_bdevs_operational": 4, 00:16:35.578 "process": { 00:16:35.578 "type": "rebuild", 00:16:35.578 "target": "spare", 00:16:35.578 "progress": { 00:16:35.578 "blocks": 19200, 00:16:35.578 "percent": 9 00:16:35.578 } 00:16:35.578 }, 00:16:35.578 "base_bdevs_list": [ 00:16:35.578 { 00:16:35.578 "name": "spare", 00:16:35.578 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:35.578 "is_configured": true, 00:16:35.578 "data_offset": 0, 00:16:35.578 "data_size": 65536 00:16:35.578 }, 00:16:35.578 { 00:16:35.578 "name": "BaseBdev2", 00:16:35.578 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:35.578 "is_configured": true, 00:16:35.578 "data_offset": 0, 00:16:35.578 "data_size": 65536 00:16:35.578 }, 00:16:35.578 { 00:16:35.578 "name": "BaseBdev3", 00:16:35.578 "uuid": 
"33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:35.578 "is_configured": true, 00:16:35.578 "data_offset": 0, 00:16:35.578 "data_size": 65536 00:16:35.578 }, 00:16:35.578 { 00:16:35.578 "name": "BaseBdev4", 00:16:35.578 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:35.578 "is_configured": true, 00:16:35.578 "data_offset": 0, 00:16:35.578 "data_size": 65536 00:16:35.578 } 00:16:35.578 ] 00:16:35.578 }' 00:16:35.578 12:42:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.578 [2024-12-14 12:42:35.082918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.578 [2024-12-14 12:42:35.131860] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:35.578 [2024-12-14 12:42:35.131999] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.578 [2024-12-14 12:42:35.132018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:35.578 [2024-12-14 12:42:35.132028] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.578 "name": "raid_bdev1", 00:16:35.578 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:35.578 "strip_size_kb": 64, 00:16:35.578 "state": "online", 00:16:35.578 "raid_level": "raid5f", 00:16:35.578 "superblock": false, 00:16:35.578 "num_base_bdevs": 4, 00:16:35.578 "num_base_bdevs_discovered": 3, 00:16:35.578 
"num_base_bdevs_operational": 3, 00:16:35.578 "base_bdevs_list": [ 00:16:35.578 { 00:16:35.578 "name": null, 00:16:35.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:35.578 "is_configured": false, 00:16:35.578 "data_offset": 0, 00:16:35.578 "data_size": 65536 00:16:35.578 }, 00:16:35.578 { 00:16:35.578 "name": "BaseBdev2", 00:16:35.578 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:35.578 "is_configured": true, 00:16:35.578 "data_offset": 0, 00:16:35.578 "data_size": 65536 00:16:35.578 }, 00:16:35.578 { 00:16:35.578 "name": "BaseBdev3", 00:16:35.578 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:35.578 "is_configured": true, 00:16:35.578 "data_offset": 0, 00:16:35.578 "data_size": 65536 00:16:35.578 }, 00:16:35.578 { 00:16:35.578 "name": "BaseBdev4", 00:16:35.578 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:35.578 "is_configured": true, 00:16:35.578 "data_offset": 0, 00:16:35.578 "data_size": 65536 00:16:35.578 } 00:16:35.578 ] 00:16:35.578 }' 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.578 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.148 12:42:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.148 "name": "raid_bdev1", 00:16:36.148 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:36.148 "strip_size_kb": 64, 00:16:36.148 "state": "online", 00:16:36.148 "raid_level": "raid5f", 00:16:36.148 "superblock": false, 00:16:36.148 "num_base_bdevs": 4, 00:16:36.148 "num_base_bdevs_discovered": 3, 00:16:36.148 "num_base_bdevs_operational": 3, 00:16:36.148 "base_bdevs_list": [ 00:16:36.148 { 00:16:36.148 "name": null, 00:16:36.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.148 "is_configured": false, 00:16:36.148 "data_offset": 0, 00:16:36.148 "data_size": 65536 00:16:36.148 }, 00:16:36.148 { 00:16:36.148 "name": "BaseBdev2", 00:16:36.148 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:36.148 "is_configured": true, 00:16:36.148 "data_offset": 0, 00:16:36.148 "data_size": 65536 00:16:36.148 }, 00:16:36.148 { 00:16:36.148 "name": "BaseBdev3", 00:16:36.148 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:36.148 "is_configured": true, 00:16:36.148 "data_offset": 0, 00:16:36.148 "data_size": 65536 00:16:36.148 }, 00:16:36.148 { 00:16:36.148 "name": "BaseBdev4", 00:16:36.148 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:36.148 "is_configured": true, 00:16:36.148 "data_offset": 0, 00:16:36.148 "data_size": 65536 00:16:36.148 } 00:16:36.148 ] 00:16:36.148 }' 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.148 [2024-12-14 12:42:35.761011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:36.148 [2024-12-14 12:42:35.776494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.148 12:42:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:36.148 [2024-12-14 12:42:35.786012] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.087 12:42:36 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.087 12:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.347 "name": "raid_bdev1", 00:16:37.347 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:37.347 "strip_size_kb": 64, 00:16:37.347 "state": "online", 00:16:37.347 "raid_level": "raid5f", 00:16:37.347 "superblock": false, 00:16:37.347 "num_base_bdevs": 4, 00:16:37.347 "num_base_bdevs_discovered": 4, 00:16:37.347 "num_base_bdevs_operational": 4, 00:16:37.347 "process": { 00:16:37.347 "type": "rebuild", 00:16:37.347 "target": "spare", 00:16:37.347 "progress": { 00:16:37.347 "blocks": 19200, 00:16:37.347 "percent": 9 00:16:37.347 } 00:16:37.347 }, 00:16:37.347 "base_bdevs_list": [ 00:16:37.347 { 00:16:37.347 "name": "spare", 00:16:37.347 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:37.347 "is_configured": true, 00:16:37.347 "data_offset": 0, 00:16:37.347 "data_size": 65536 00:16:37.347 }, 00:16:37.347 { 00:16:37.347 "name": "BaseBdev2", 00:16:37.347 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:37.347 "is_configured": true, 00:16:37.347 "data_offset": 0, 00:16:37.347 "data_size": 65536 00:16:37.347 }, 00:16:37.347 { 00:16:37.347 "name": "BaseBdev3", 00:16:37.347 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:37.347 "is_configured": true, 00:16:37.347 "data_offset": 0, 00:16:37.347 "data_size": 65536 00:16:37.347 }, 00:16:37.347 { 00:16:37.347 "name": "BaseBdev4", 00:16:37.347 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:37.347 "is_configured": true, 00:16:37.347 "data_offset": 0, 00:16:37.347 "data_size": 65536 00:16:37.347 } 00:16:37.347 ] 00:16:37.347 }' 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=611 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.347 12:42:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.348 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:37.348 
"name": "raid_bdev1", 00:16:37.348 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:37.348 "strip_size_kb": 64, 00:16:37.348 "state": "online", 00:16:37.348 "raid_level": "raid5f", 00:16:37.348 "superblock": false, 00:16:37.348 "num_base_bdevs": 4, 00:16:37.348 "num_base_bdevs_discovered": 4, 00:16:37.348 "num_base_bdevs_operational": 4, 00:16:37.348 "process": { 00:16:37.348 "type": "rebuild", 00:16:37.348 "target": "spare", 00:16:37.348 "progress": { 00:16:37.348 "blocks": 21120, 00:16:37.348 "percent": 10 00:16:37.348 } 00:16:37.348 }, 00:16:37.348 "base_bdevs_list": [ 00:16:37.348 { 00:16:37.348 "name": "spare", 00:16:37.348 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:37.348 "is_configured": true, 00:16:37.348 "data_offset": 0, 00:16:37.348 "data_size": 65536 00:16:37.348 }, 00:16:37.348 { 00:16:37.348 "name": "BaseBdev2", 00:16:37.348 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:37.348 "is_configured": true, 00:16:37.348 "data_offset": 0, 00:16:37.348 "data_size": 65536 00:16:37.348 }, 00:16:37.348 { 00:16:37.348 "name": "BaseBdev3", 00:16:37.348 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:37.348 "is_configured": true, 00:16:37.348 "data_offset": 0, 00:16:37.348 "data_size": 65536 00:16:37.348 }, 00:16:37.348 { 00:16:37.348 "name": "BaseBdev4", 00:16:37.348 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:37.348 "is_configured": true, 00:16:37.348 "data_offset": 0, 00:16:37.348 "data_size": 65536 00:16:37.348 } 00:16:37.348 ] 00:16:37.348 }' 00:16:37.348 12:42:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.348 12:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:37.348 12:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.348 12:42:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:37.348 12:42:37 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.729 12:42:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.730 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.730 "name": "raid_bdev1", 00:16:38.730 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:38.730 "strip_size_kb": 64, 00:16:38.730 "state": "online", 00:16:38.730 "raid_level": "raid5f", 00:16:38.730 "superblock": false, 00:16:38.730 "num_base_bdevs": 4, 00:16:38.730 "num_base_bdevs_discovered": 4, 00:16:38.730 "num_base_bdevs_operational": 4, 00:16:38.730 "process": { 00:16:38.730 "type": "rebuild", 00:16:38.730 "target": "spare", 00:16:38.730 "progress": { 00:16:38.730 "blocks": 42240, 00:16:38.730 "percent": 21 00:16:38.730 } 00:16:38.730 }, 00:16:38.730 "base_bdevs_list": [ 00:16:38.730 { 
00:16:38.730 "name": "spare", 00:16:38.730 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:38.730 "is_configured": true, 00:16:38.730 "data_offset": 0, 00:16:38.730 "data_size": 65536 00:16:38.730 }, 00:16:38.730 { 00:16:38.730 "name": "BaseBdev2", 00:16:38.730 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:38.730 "is_configured": true, 00:16:38.730 "data_offset": 0, 00:16:38.730 "data_size": 65536 00:16:38.730 }, 00:16:38.730 { 00:16:38.730 "name": "BaseBdev3", 00:16:38.730 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:38.730 "is_configured": true, 00:16:38.730 "data_offset": 0, 00:16:38.730 "data_size": 65536 00:16:38.730 }, 00:16:38.730 { 00:16:38.730 "name": "BaseBdev4", 00:16:38.730 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:38.730 "is_configured": true, 00:16:38.730 "data_offset": 0, 00:16:38.730 "data_size": 65536 00:16:38.730 } 00:16:38.730 ] 00:16:38.730 }' 00:16:38.730 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.730 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.730 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.730 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.730 12:42:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.687 "name": "raid_bdev1", 00:16:39.687 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:39.687 "strip_size_kb": 64, 00:16:39.687 "state": "online", 00:16:39.687 "raid_level": "raid5f", 00:16:39.687 "superblock": false, 00:16:39.687 "num_base_bdevs": 4, 00:16:39.687 "num_base_bdevs_discovered": 4, 00:16:39.687 "num_base_bdevs_operational": 4, 00:16:39.687 "process": { 00:16:39.687 "type": "rebuild", 00:16:39.687 "target": "spare", 00:16:39.687 "progress": { 00:16:39.687 "blocks": 65280, 00:16:39.687 "percent": 33 00:16:39.687 } 00:16:39.687 }, 00:16:39.687 "base_bdevs_list": [ 00:16:39.687 { 00:16:39.687 "name": "spare", 00:16:39.687 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:39.687 "is_configured": true, 00:16:39.687 "data_offset": 0, 00:16:39.687 "data_size": 65536 00:16:39.687 }, 00:16:39.687 { 00:16:39.687 "name": "BaseBdev2", 00:16:39.687 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:39.687 "is_configured": true, 00:16:39.687 "data_offset": 0, 00:16:39.687 "data_size": 65536 00:16:39.687 }, 00:16:39.687 { 00:16:39.687 "name": "BaseBdev3", 00:16:39.687 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:39.687 "is_configured": true, 00:16:39.687 "data_offset": 0, 00:16:39.687 
"data_size": 65536 00:16:39.687 }, 00:16:39.687 { 00:16:39.687 "name": "BaseBdev4", 00:16:39.687 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:39.687 "is_configured": true, 00:16:39.687 "data_offset": 0, 00:16:39.687 "data_size": 65536 00:16:39.687 } 00:16:39.687 ] 00:16:39.687 }' 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.687 12:42:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.638 12:42:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.898 12:42:40 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.898 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.898 "name": "raid_bdev1", 00:16:40.898 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:40.898 "strip_size_kb": 64, 00:16:40.898 "state": "online", 00:16:40.898 "raid_level": "raid5f", 00:16:40.898 "superblock": false, 00:16:40.898 "num_base_bdevs": 4, 00:16:40.898 "num_base_bdevs_discovered": 4, 00:16:40.898 "num_base_bdevs_operational": 4, 00:16:40.898 "process": { 00:16:40.898 "type": "rebuild", 00:16:40.898 "target": "spare", 00:16:40.898 "progress": { 00:16:40.898 "blocks": 86400, 00:16:40.898 "percent": 43 00:16:40.898 } 00:16:40.898 }, 00:16:40.898 "base_bdevs_list": [ 00:16:40.898 { 00:16:40.898 "name": "spare", 00:16:40.898 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:40.898 "is_configured": true, 00:16:40.898 "data_offset": 0, 00:16:40.898 "data_size": 65536 00:16:40.898 }, 00:16:40.898 { 00:16:40.898 "name": "BaseBdev2", 00:16:40.898 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:40.898 "is_configured": true, 00:16:40.898 "data_offset": 0, 00:16:40.898 "data_size": 65536 00:16:40.898 }, 00:16:40.898 { 00:16:40.898 "name": "BaseBdev3", 00:16:40.898 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:40.898 "is_configured": true, 00:16:40.898 "data_offset": 0, 00:16:40.898 "data_size": 65536 00:16:40.898 }, 00:16:40.898 { 00:16:40.898 "name": "BaseBdev4", 00:16:40.898 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:40.898 "is_configured": true, 00:16:40.898 "data_offset": 0, 00:16:40.898 "data_size": 65536 00:16:40.898 } 00:16:40.898 ] 00:16:40.898 }' 00:16:40.898 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.898 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.898 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:16:40.898 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.898 12:42:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.837 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.837 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.837 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.837 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.837 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.837 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.837 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.838 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.838 12:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.838 12:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.838 12:42:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.838 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.838 "name": "raid_bdev1", 00:16:41.838 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:41.838 "strip_size_kb": 64, 00:16:41.838 "state": "online", 00:16:41.838 "raid_level": "raid5f", 00:16:41.838 "superblock": false, 00:16:41.838 "num_base_bdevs": 4, 00:16:41.838 "num_base_bdevs_discovered": 4, 00:16:41.838 "num_base_bdevs_operational": 4, 00:16:41.838 "process": { 00:16:41.838 "type": "rebuild", 00:16:41.838 "target": "spare", 00:16:41.838 
"progress": { 00:16:41.838 "blocks": 109440, 00:16:41.838 "percent": 55 00:16:41.838 } 00:16:41.838 }, 00:16:41.838 "base_bdevs_list": [ 00:16:41.838 { 00:16:41.838 "name": "spare", 00:16:41.838 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:41.838 "is_configured": true, 00:16:41.838 "data_offset": 0, 00:16:41.838 "data_size": 65536 00:16:41.838 }, 00:16:41.838 { 00:16:41.838 "name": "BaseBdev2", 00:16:41.838 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:41.838 "is_configured": true, 00:16:41.838 "data_offset": 0, 00:16:41.838 "data_size": 65536 00:16:41.838 }, 00:16:41.838 { 00:16:41.838 "name": "BaseBdev3", 00:16:41.838 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:41.838 "is_configured": true, 00:16:41.838 "data_offset": 0, 00:16:41.838 "data_size": 65536 00:16:41.838 }, 00:16:41.838 { 00:16:41.838 "name": "BaseBdev4", 00:16:41.838 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:41.838 "is_configured": true, 00:16:41.838 "data_offset": 0, 00:16:41.838 "data_size": 65536 00:16:41.838 } 00:16:41.838 ] 00:16:41.838 }' 00:16:41.838 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.096 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.096 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.096 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.096 12:42:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:43.036 12:42:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:43.036 "name": "raid_bdev1", 00:16:43.036 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:43.036 "strip_size_kb": 64, 00:16:43.036 "state": "online", 00:16:43.036 "raid_level": "raid5f", 00:16:43.036 "superblock": false, 00:16:43.036 "num_base_bdevs": 4, 00:16:43.036 "num_base_bdevs_discovered": 4, 00:16:43.036 "num_base_bdevs_operational": 4, 00:16:43.036 "process": { 00:16:43.036 "type": "rebuild", 00:16:43.036 "target": "spare", 00:16:43.036 "progress": { 00:16:43.036 "blocks": 130560, 00:16:43.036 "percent": 66 00:16:43.036 } 00:16:43.036 }, 00:16:43.036 "base_bdevs_list": [ 00:16:43.036 { 00:16:43.036 "name": "spare", 00:16:43.036 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:43.036 "is_configured": true, 00:16:43.036 "data_offset": 0, 00:16:43.036 "data_size": 65536 00:16:43.036 }, 00:16:43.036 { 00:16:43.036 "name": "BaseBdev2", 00:16:43.036 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:43.036 "is_configured": true, 00:16:43.036 "data_offset": 0, 00:16:43.036 "data_size": 65536 00:16:43.036 }, 00:16:43.036 { 
00:16:43.036 "name": "BaseBdev3", 00:16:43.036 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:43.036 "is_configured": true, 00:16:43.036 "data_offset": 0, 00:16:43.036 "data_size": 65536 00:16:43.036 }, 00:16:43.036 { 00:16:43.036 "name": "BaseBdev4", 00:16:43.036 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:43.036 "is_configured": true, 00:16:43.036 "data_offset": 0, 00:16:43.036 "data_size": 65536 00:16:43.036 } 00:16:43.036 ] 00:16:43.036 }' 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:43.036 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.295 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:43.295 12:42:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.235 "name": "raid_bdev1", 00:16:44.235 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:44.235 "strip_size_kb": 64, 00:16:44.235 "state": "online", 00:16:44.235 "raid_level": "raid5f", 00:16:44.235 "superblock": false, 00:16:44.235 "num_base_bdevs": 4, 00:16:44.235 "num_base_bdevs_discovered": 4, 00:16:44.235 "num_base_bdevs_operational": 4, 00:16:44.235 "process": { 00:16:44.235 "type": "rebuild", 00:16:44.235 "target": "spare", 00:16:44.235 "progress": { 00:16:44.235 "blocks": 151680, 00:16:44.235 "percent": 77 00:16:44.235 } 00:16:44.235 }, 00:16:44.235 "base_bdevs_list": [ 00:16:44.235 { 00:16:44.235 "name": "spare", 00:16:44.235 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:44.235 "is_configured": true, 00:16:44.235 "data_offset": 0, 00:16:44.235 "data_size": 65536 00:16:44.235 }, 00:16:44.235 { 00:16:44.235 "name": "BaseBdev2", 00:16:44.235 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:44.235 "is_configured": true, 00:16:44.235 "data_offset": 0, 00:16:44.235 "data_size": 65536 00:16:44.235 }, 00:16:44.235 { 00:16:44.235 "name": "BaseBdev3", 00:16:44.235 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:44.235 "is_configured": true, 00:16:44.235 "data_offset": 0, 00:16:44.235 "data_size": 65536 00:16:44.235 }, 00:16:44.235 { 00:16:44.235 "name": "BaseBdev4", 00:16:44.235 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:44.235 "is_configured": true, 00:16:44.235 "data_offset": 0, 00:16:44.235 "data_size": 65536 00:16:44.235 } 00:16:44.235 ] 00:16:44.235 }' 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.235 12:42:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.235 12:42:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.615 12:42:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.615 12:42:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.615 "name": "raid_bdev1", 00:16:45.615 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:45.615 "strip_size_kb": 64, 00:16:45.615 "state": "online", 00:16:45.615 "raid_level": "raid5f", 00:16:45.615 "superblock": false, 00:16:45.615 "num_base_bdevs": 4, 00:16:45.615 
"num_base_bdevs_discovered": 4, 00:16:45.615 "num_base_bdevs_operational": 4, 00:16:45.615 "process": { 00:16:45.615 "type": "rebuild", 00:16:45.615 "target": "spare", 00:16:45.615 "progress": { 00:16:45.615 "blocks": 174720, 00:16:45.615 "percent": 88 00:16:45.615 } 00:16:45.615 }, 00:16:45.615 "base_bdevs_list": [ 00:16:45.615 { 00:16:45.615 "name": "spare", 00:16:45.615 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:45.615 "is_configured": true, 00:16:45.615 "data_offset": 0, 00:16:45.615 "data_size": 65536 00:16:45.615 }, 00:16:45.615 { 00:16:45.615 "name": "BaseBdev2", 00:16:45.615 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:45.615 "is_configured": true, 00:16:45.615 "data_offset": 0, 00:16:45.615 "data_size": 65536 00:16:45.615 }, 00:16:45.615 { 00:16:45.615 "name": "BaseBdev3", 00:16:45.615 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:45.615 "is_configured": true, 00:16:45.615 "data_offset": 0, 00:16:45.615 "data_size": 65536 00:16:45.615 }, 00:16:45.615 { 00:16:45.615 "name": "BaseBdev4", 00:16:45.615 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:45.615 "is_configured": true, 00:16:45.615 "data_offset": 0, 00:16:45.615 "data_size": 65536 00:16:45.615 } 00:16:45.615 ] 00:16:45.615 }' 00:16:45.615 12:42:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.615 12:42:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.615 12:42:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.615 12:42:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.615 12:42:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.553 [2024-12-14 12:42:46.152724] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:46.553 [2024-12-14 12:42:46.152833] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:46.553 [2024-12-14 12:42:46.152916] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.553 "name": "raid_bdev1", 00:16:46.553 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:46.553 "strip_size_kb": 64, 00:16:46.553 "state": "online", 00:16:46.553 "raid_level": "raid5f", 00:16:46.553 "superblock": false, 00:16:46.553 "num_base_bdevs": 4, 00:16:46.553 "num_base_bdevs_discovered": 4, 00:16:46.553 "num_base_bdevs_operational": 4, 00:16:46.553 "process": { 00:16:46.553 "type": "rebuild", 00:16:46.553 "target": "spare", 00:16:46.553 "progress": { 00:16:46.553 "blocks": 195840, 00:16:46.553 
"percent": 99 00:16:46.553 } 00:16:46.553 }, 00:16:46.553 "base_bdevs_list": [ 00:16:46.553 { 00:16:46.553 "name": "spare", 00:16:46.553 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:46.553 "is_configured": true, 00:16:46.553 "data_offset": 0, 00:16:46.553 "data_size": 65536 00:16:46.553 }, 00:16:46.553 { 00:16:46.553 "name": "BaseBdev2", 00:16:46.553 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:46.553 "is_configured": true, 00:16:46.553 "data_offset": 0, 00:16:46.553 "data_size": 65536 00:16:46.553 }, 00:16:46.553 { 00:16:46.553 "name": "BaseBdev3", 00:16:46.553 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:46.553 "is_configured": true, 00:16:46.553 "data_offset": 0, 00:16:46.553 "data_size": 65536 00:16:46.553 }, 00:16:46.553 { 00:16:46.553 "name": "BaseBdev4", 00:16:46.553 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:46.553 "is_configured": true, 00:16:46.553 "data_offset": 0, 00:16:46.553 "data_size": 65536 00:16:46.553 } 00:16:46.553 ] 00:16:46.553 }' 00:16:46.553 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.554 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.554 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.554 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.554 12:42:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.935 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.935 "name": "raid_bdev1", 00:16:47.935 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:47.935 "strip_size_kb": 64, 00:16:47.935 "state": "online", 00:16:47.935 "raid_level": "raid5f", 00:16:47.935 "superblock": false, 00:16:47.935 "num_base_bdevs": 4, 00:16:47.935 "num_base_bdevs_discovered": 4, 00:16:47.935 "num_base_bdevs_operational": 4, 00:16:47.935 "base_bdevs_list": [ 00:16:47.935 { 00:16:47.935 "name": "spare", 00:16:47.935 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:47.935 "is_configured": true, 00:16:47.935 "data_offset": 0, 00:16:47.935 "data_size": 65536 00:16:47.935 }, 00:16:47.935 { 00:16:47.935 "name": "BaseBdev2", 00:16:47.935 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:47.935 "is_configured": true, 00:16:47.935 "data_offset": 0, 00:16:47.935 "data_size": 65536 00:16:47.935 }, 00:16:47.935 { 00:16:47.935 "name": "BaseBdev3", 00:16:47.935 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:47.935 "is_configured": true, 00:16:47.935 "data_offset": 0, 00:16:47.935 "data_size": 65536 00:16:47.935 }, 00:16:47.935 { 00:16:47.935 "name": "BaseBdev4", 00:16:47.935 
"uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:47.935 "is_configured": true, 00:16:47.935 "data_offset": 0, 00:16:47.935 "data_size": 65536 00:16:47.935 } 00:16:47.935 ] 00:16:47.935 }' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.936 "name": "raid_bdev1", 00:16:47.936 "uuid": 
"8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:47.936 "strip_size_kb": 64, 00:16:47.936 "state": "online", 00:16:47.936 "raid_level": "raid5f", 00:16:47.936 "superblock": false, 00:16:47.936 "num_base_bdevs": 4, 00:16:47.936 "num_base_bdevs_discovered": 4, 00:16:47.936 "num_base_bdevs_operational": 4, 00:16:47.936 "base_bdevs_list": [ 00:16:47.936 { 00:16:47.936 "name": "spare", 00:16:47.936 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 0, 00:16:47.936 "data_size": 65536 00:16:47.936 }, 00:16:47.936 { 00:16:47.936 "name": "BaseBdev2", 00:16:47.936 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 0, 00:16:47.936 "data_size": 65536 00:16:47.936 }, 00:16:47.936 { 00:16:47.936 "name": "BaseBdev3", 00:16:47.936 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 0, 00:16:47.936 "data_size": 65536 00:16:47.936 }, 00:16:47.936 { 00:16:47.936 "name": "BaseBdev4", 00:16:47.936 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 0, 00:16:47.936 "data_size": 65536 00:16:47.936 } 00:16:47.936 ] 00:16:47.936 }' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.936 12:42:47 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.936 "name": "raid_bdev1", 00:16:47.936 "uuid": "8a0474de-e36a-42c4-ab3f-fe4ac8dafda1", 00:16:47.936 "strip_size_kb": 64, 00:16:47.936 "state": "online", 00:16:47.936 "raid_level": "raid5f", 00:16:47.936 "superblock": false, 00:16:47.936 "num_base_bdevs": 4, 00:16:47.936 "num_base_bdevs_discovered": 4, 00:16:47.936 "num_base_bdevs_operational": 4, 00:16:47.936 "base_bdevs_list": [ 00:16:47.936 { 00:16:47.936 "name": "spare", 00:16:47.936 "uuid": "e12b7857-3770-5666-b20b-f894ccb33602", 00:16:47.936 "is_configured": 
true, 00:16:47.936 "data_offset": 0, 00:16:47.936 "data_size": 65536 00:16:47.936 }, 00:16:47.936 { 00:16:47.936 "name": "BaseBdev2", 00:16:47.936 "uuid": "d9d2b0f5-0ef4-5561-b44c-0d53dbd8fdeb", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 0, 00:16:47.936 "data_size": 65536 00:16:47.936 }, 00:16:47.936 { 00:16:47.936 "name": "BaseBdev3", 00:16:47.936 "uuid": "33c6f17e-a321-5d42-8543-36c9a4cc1a6a", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 0, 00:16:47.936 "data_size": 65536 00:16:47.936 }, 00:16:47.936 { 00:16:47.936 "name": "BaseBdev4", 00:16:47.936 "uuid": "26ce0f6f-9e45-5311-aa0d-a30902e174f6", 00:16:47.936 "is_configured": true, 00:16:47.936 "data_offset": 0, 00:16:47.936 "data_size": 65536 00:16:47.936 } 00:16:47.936 ] 00:16:47.936 }' 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.936 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.506 12:42:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.506 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.506 12:42:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.506 [2024-12-14 12:42:48.004587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.506 [2024-12-14 12:42:48.004621] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.506 [2024-12-14 12:42:48.004714] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.506 [2024-12-14 12:42:48.004804] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.506 [2024-12-14 12:42:48.004814] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:48.506 12:42:48 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:48.506 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:48.766 /dev/nbd0 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:48.766 1+0 records in 00:16:48.766 1+0 records out 00:16:48.766 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000430635 s, 9.5 MB/s 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@893 -- # return 0 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:48.766 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:48.766 /dev/nbd1 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.026 1+0 records in 00:16:49.026 1+0 records out 00:16:49.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432646 s, 9.5 MB/s 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@890 -- # size=4096 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.026 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.286 12:42:48 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 86337 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 86337 ']' 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 86337 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86337 00:16:49.547 killing process with pid 86337 00:16:49.547 Received shutdown signal, test time was about 60.000000 seconds 00:16:49.547 00:16:49.547 Latency(us) 00:16:49.547 [2024-12-14T12:42:49.285Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.547 [2024-12-14T12:42:49.285Z] =================================================================================================================== 00:16:49.547 [2024-12-14T12:42:49.285Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86337' 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 86337 00:16:49.547 [2024-12-14 12:42:49.221203] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.547 12:42:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 86337 00:16:50.117 [2024-12-14 12:42:49.703777] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:51.499 00:16:51.499 real 0m20.125s 00:16:51.499 user 0m24.236s 00:16:51.499 sys 0m2.173s 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:51.499 ************************************ 00:16:51.499 END TEST raid5f_rebuild_test 00:16:51.499 ************************************ 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.499 12:42:50 
bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:51.499 12:42:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:51.499 12:42:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:51.499 12:42:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:51.499 ************************************ 00:16:51.499 START TEST raid5f_rebuild_test_sb 00:16:51.499 ************************************ 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=86856 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 86856 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 86856 ']' 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:51.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:51.499 12:42:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.499 [2024-12-14 12:42:50.968268] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:51.499 [2024-12-14 12:42:50.968470] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:51.499 Zero copy mechanism will not be used. 
00:16:51.499 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86856 ] 00:16:51.499 [2024-12-14 12:42:51.143025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.759 [2024-12-14 12:42:51.254700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.759 [2024-12-14 12:42:51.442391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.759 [2024-12-14 12:42:51.442477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 BaseBdev1_malloc 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 [2024-12-14 12:42:51.842912] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:52.329 [2024-12-14 12:42:51.843026] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:16:52.329 [2024-12-14 12:42:51.843065] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:52.329 [2024-12-14 12:42:51.843077] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.329 [2024-12-14 12:42:51.845222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.329 [2024-12-14 12:42:51.845261] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:52.329 BaseBdev1 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 BaseBdev2_malloc 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 [2024-12-14 12:42:51.894499] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:52.329 [2024-12-14 12:42:51.894560] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.329 [2024-12-14 12:42:51.894578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:52.329 
[2024-12-14 12:42:51.894589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.329 [2024-12-14 12:42:51.896571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.329 [2024-12-14 12:42:51.896610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:52.329 BaseBdev2 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 BaseBdev3_malloc 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 [2024-12-14 12:42:51.972988] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:52.329 [2024-12-14 12:42:51.973064] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.329 [2024-12-14 12:42:51.973085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:52.329 [2024-12-14 12:42:51.973096] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.329 [2024-12-14 12:42:51.975144] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.329 [2024-12-14 12:42:51.975181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:52.329 BaseBdev3 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.329 12:42:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 BaseBdev4_malloc 00:16:52.329 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.329 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:52.329 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.329 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.329 [2024-12-14 12:42:52.026852] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:52.329 [2024-12-14 12:42:52.026966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.329 [2024-12-14 12:42:52.026990] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:52.330 [2024-12-14 12:42:52.027000] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.330 [2024-12-14 12:42:52.028931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.330 [2024-12-14 12:42:52.028971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:16:52.330 BaseBdev4 00:16:52.330 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.330 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:52.330 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.330 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.590 spare_malloc 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.590 spare_delay 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.590 [2024-12-14 12:42:52.091788] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:52.590 [2024-12-14 12:42:52.091888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.590 [2024-12-14 12:42:52.091924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:52.590 [2024-12-14 12:42:52.091934] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.590 [2024-12-14 12:42:52.093896] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.590 [2024-12-14 12:42:52.093935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:52.590 spare 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.590 [2024-12-14 12:42:52.103830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.590 [2024-12-14 12:42:52.105541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.590 [2024-12-14 12:42:52.105603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.590 [2024-12-14 12:42:52.105653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:52.590 [2024-12-14 12:42:52.105837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:52.590 [2024-12-14 12:42:52.105850] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:52.590 [2024-12-14 12:42:52.106091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:52.590 [2024-12-14 12:42:52.113255] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:52.590 [2024-12-14 12:42:52.113328] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:52.590 [2024-12-14 12:42:52.113530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.590 "name": "raid_bdev1", 00:16:52.590 "uuid": 
"ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:52.590 "strip_size_kb": 64, 00:16:52.590 "state": "online", 00:16:52.590 "raid_level": "raid5f", 00:16:52.590 "superblock": true, 00:16:52.590 "num_base_bdevs": 4, 00:16:52.590 "num_base_bdevs_discovered": 4, 00:16:52.590 "num_base_bdevs_operational": 4, 00:16:52.590 "base_bdevs_list": [ 00:16:52.590 { 00:16:52.590 "name": "BaseBdev1", 00:16:52.590 "uuid": "d39f6670-ca9b-5f48-97ea-5d9029954498", 00:16:52.590 "is_configured": true, 00:16:52.590 "data_offset": 2048, 00:16:52.590 "data_size": 63488 00:16:52.590 }, 00:16:52.590 { 00:16:52.590 "name": "BaseBdev2", 00:16:52.590 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:52.590 "is_configured": true, 00:16:52.590 "data_offset": 2048, 00:16:52.590 "data_size": 63488 00:16:52.590 }, 00:16:52.590 { 00:16:52.590 "name": "BaseBdev3", 00:16:52.590 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:52.590 "is_configured": true, 00:16:52.590 "data_offset": 2048, 00:16:52.590 "data_size": 63488 00:16:52.590 }, 00:16:52.590 { 00:16:52.590 "name": "BaseBdev4", 00:16:52.590 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:16:52.590 "is_configured": true, 00:16:52.590 "data_offset": 2048, 00:16:52.590 "data_size": 63488 00:16:52.590 } 00:16:52.590 ] 00:16:52.590 }' 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.590 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:52.850 [2024-12-14 12:42:52.521386] bdev_raid.c:1133:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.850 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:53.109 [2024-12-14 12:42:52.796747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:53.109 /dev/nbd0 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.109 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:53.368 1+0 records in 00:16:53.368 1+0 records out 00:16:53.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523232 s, 7.8 MB/s 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:53.368 12:42:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:53.627 496+0 records in 00:16:53.627 496+0 records out 00:16:53.627 97517568 bytes (98 MB, 93 MiB) copied, 0.438378 s, 222 MB/s 00:16:53.627 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:53.627 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.627 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:53.627 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:53.627 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:53.627 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:16:53.627 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:53.887 [2024-12-14 12:42:53.493757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.887 [2024-12-14 12:42:53.532396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.887 "name": "raid_bdev1", 00:16:53.887 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:53.887 "strip_size_kb": 64, 00:16:53.887 "state": "online", 00:16:53.887 "raid_level": "raid5f", 00:16:53.887 "superblock": true, 00:16:53.887 "num_base_bdevs": 4, 00:16:53.887 "num_base_bdevs_discovered": 3, 00:16:53.887 "num_base_bdevs_operational": 3, 00:16:53.887 "base_bdevs_list": [ 00:16:53.887 { 00:16:53.887 "name": null, 00:16:53.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.887 "is_configured": 
false, 00:16:53.887 "data_offset": 0, 00:16:53.887 "data_size": 63488 00:16:53.887 }, 00:16:53.887 { 00:16:53.887 "name": "BaseBdev2", 00:16:53.887 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:53.887 "is_configured": true, 00:16:53.887 "data_offset": 2048, 00:16:53.887 "data_size": 63488 00:16:53.887 }, 00:16:53.887 { 00:16:53.887 "name": "BaseBdev3", 00:16:53.887 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:53.887 "is_configured": true, 00:16:53.887 "data_offset": 2048, 00:16:53.887 "data_size": 63488 00:16:53.887 }, 00:16:53.887 { 00:16:53.887 "name": "BaseBdev4", 00:16:53.887 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:16:53.887 "is_configured": true, 00:16:53.887 "data_offset": 2048, 00:16:53.887 "data_size": 63488 00:16:53.887 } 00:16:53.887 ] 00:16:53.887 }' 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.887 12:42:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.472 12:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.472 12:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.472 12:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.472 [2024-12-14 12:42:54.011567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.472 [2024-12-14 12:42:54.027094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:54.472 12:42:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.472 12:42:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:54.472 [2024-12-14 12:42:54.036485] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.442 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.442 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.442 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.443 "name": "raid_bdev1", 00:16:55.443 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:55.443 "strip_size_kb": 64, 00:16:55.443 "state": "online", 00:16:55.443 "raid_level": "raid5f", 00:16:55.443 "superblock": true, 00:16:55.443 "num_base_bdevs": 4, 00:16:55.443 "num_base_bdevs_discovered": 4, 00:16:55.443 "num_base_bdevs_operational": 4, 00:16:55.443 "process": { 00:16:55.443 "type": "rebuild", 00:16:55.443 "target": "spare", 00:16:55.443 "progress": { 00:16:55.443 "blocks": 19200, 00:16:55.443 "percent": 10 00:16:55.443 } 00:16:55.443 }, 00:16:55.443 "base_bdevs_list": [ 00:16:55.443 { 00:16:55.443 "name": "spare", 00:16:55.443 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:16:55.443 "is_configured": true, 00:16:55.443 "data_offset": 2048, 00:16:55.443 "data_size": 63488 00:16:55.443 }, 
00:16:55.443 { 00:16:55.443 "name": "BaseBdev2", 00:16:55.443 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:55.443 "is_configured": true, 00:16:55.443 "data_offset": 2048, 00:16:55.443 "data_size": 63488 00:16:55.443 }, 00:16:55.443 { 00:16:55.443 "name": "BaseBdev3", 00:16:55.443 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:55.443 "is_configured": true, 00:16:55.443 "data_offset": 2048, 00:16:55.443 "data_size": 63488 00:16:55.443 }, 00:16:55.443 { 00:16:55.443 "name": "BaseBdev4", 00:16:55.443 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:16:55.443 "is_configured": true, 00:16:55.443 "data_offset": 2048, 00:16:55.443 "data_size": 63488 00:16:55.443 } 00:16:55.443 ] 00:16:55.443 }' 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.443 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.705 [2024-12-14 12:42:55.191078] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.705 [2024-12-14 12:42:55.242749] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:55.705 [2024-12-14 12:42:55.242821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.705 [2024-12-14 12:42:55.242841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.705 
[2024-12-14 12:42:55.242856] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.705 "name": "raid_bdev1", 00:16:55.705 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:55.705 "strip_size_kb": 64, 00:16:55.705 "state": "online", 00:16:55.705 "raid_level": "raid5f", 00:16:55.705 "superblock": true, 00:16:55.705 "num_base_bdevs": 4, 00:16:55.705 "num_base_bdevs_discovered": 3, 00:16:55.705 "num_base_bdevs_operational": 3, 00:16:55.705 "base_bdevs_list": [ 00:16:55.705 { 00:16:55.705 "name": null, 00:16:55.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.705 "is_configured": false, 00:16:55.705 "data_offset": 0, 00:16:55.705 "data_size": 63488 00:16:55.705 }, 00:16:55.705 { 00:16:55.705 "name": "BaseBdev2", 00:16:55.705 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:55.705 "is_configured": true, 00:16:55.705 "data_offset": 2048, 00:16:55.705 "data_size": 63488 00:16:55.705 }, 00:16:55.705 { 00:16:55.705 "name": "BaseBdev3", 00:16:55.705 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:55.705 "is_configured": true, 00:16:55.705 "data_offset": 2048, 00:16:55.705 "data_size": 63488 00:16:55.705 }, 00:16:55.705 { 00:16:55.705 "name": "BaseBdev4", 00:16:55.705 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:16:55.705 "is_configured": true, 00:16:55.705 "data_offset": 2048, 00:16:55.705 "data_size": 63488 00:16:55.705 } 00:16:55.705 ] 00:16:55.705 }' 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.705 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.275 "name": "raid_bdev1", 00:16:56.275 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:56.275 "strip_size_kb": 64, 00:16:56.275 "state": "online", 00:16:56.275 "raid_level": "raid5f", 00:16:56.275 "superblock": true, 00:16:56.275 "num_base_bdevs": 4, 00:16:56.275 "num_base_bdevs_discovered": 3, 00:16:56.275 "num_base_bdevs_operational": 3, 00:16:56.275 "base_bdevs_list": [ 00:16:56.275 { 00:16:56.275 "name": null, 00:16:56.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.275 "is_configured": false, 00:16:56.275 "data_offset": 0, 00:16:56.275 "data_size": 63488 00:16:56.275 }, 00:16:56.275 { 00:16:56.275 "name": "BaseBdev2", 00:16:56.275 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:56.275 "is_configured": true, 00:16:56.275 "data_offset": 2048, 00:16:56.275 "data_size": 63488 00:16:56.275 }, 00:16:56.275 { 00:16:56.275 "name": "BaseBdev3", 00:16:56.275 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:56.275 "is_configured": true, 00:16:56.275 "data_offset": 2048, 00:16:56.275 "data_size": 63488 00:16:56.275 }, 00:16:56.275 { 00:16:56.275 "name": "BaseBdev4", 00:16:56.275 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 
00:16:56.275 "is_configured": true, 00:16:56.275 "data_offset": 2048, 00:16:56.275 "data_size": 63488 00:16:56.275 } 00:16:56.275 ] 00:16:56.275 }' 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.275 [2024-12-14 12:42:55.857786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:56.275 [2024-12-14 12:42:55.873076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.275 12:42:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:56.275 [2024-12-14 12:42:55.882694] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.212 "name": "raid_bdev1", 00:16:57.212 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:57.212 "strip_size_kb": 64, 00:16:57.212 "state": "online", 00:16:57.212 "raid_level": "raid5f", 00:16:57.212 "superblock": true, 00:16:57.212 "num_base_bdevs": 4, 00:16:57.212 "num_base_bdevs_discovered": 4, 00:16:57.212 "num_base_bdevs_operational": 4, 00:16:57.212 "process": { 00:16:57.212 "type": "rebuild", 00:16:57.212 "target": "spare", 00:16:57.212 "progress": { 00:16:57.212 "blocks": 19200, 00:16:57.212 "percent": 10 00:16:57.212 } 00:16:57.212 }, 00:16:57.212 "base_bdevs_list": [ 00:16:57.212 { 00:16:57.212 "name": "spare", 00:16:57.212 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:16:57.212 "is_configured": true, 00:16:57.212 "data_offset": 2048, 00:16:57.212 "data_size": 63488 00:16:57.212 }, 00:16:57.212 { 00:16:57.212 "name": "BaseBdev2", 00:16:57.212 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:57.212 "is_configured": true, 00:16:57.212 "data_offset": 2048, 00:16:57.212 "data_size": 63488 00:16:57.212 }, 00:16:57.212 { 00:16:57.212 "name": "BaseBdev3", 00:16:57.212 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:57.212 "is_configured": true, 00:16:57.212 "data_offset": 2048, 
00:16:57.212 "data_size": 63488 00:16:57.212 }, 00:16:57.212 { 00:16:57.212 "name": "BaseBdev4", 00:16:57.212 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:16:57.212 "is_configured": true, 00:16:57.212 "data_offset": 2048, 00:16:57.212 "data_size": 63488 00:16:57.212 } 00:16:57.212 ] 00:16:57.212 }' 00:16:57.212 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.471 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.471 12:42:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:57.471 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=632 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.471 "name": "raid_bdev1", 00:16:57.471 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:57.471 "strip_size_kb": 64, 00:16:57.471 "state": "online", 00:16:57.471 "raid_level": "raid5f", 00:16:57.471 "superblock": true, 00:16:57.471 "num_base_bdevs": 4, 00:16:57.471 "num_base_bdevs_discovered": 4, 00:16:57.471 "num_base_bdevs_operational": 4, 00:16:57.471 "process": { 00:16:57.471 "type": "rebuild", 00:16:57.471 "target": "spare", 00:16:57.471 "progress": { 00:16:57.471 "blocks": 21120, 00:16:57.471 "percent": 11 00:16:57.471 } 00:16:57.471 }, 00:16:57.471 "base_bdevs_list": [ 00:16:57.471 { 00:16:57.471 "name": "spare", 00:16:57.471 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:16:57.471 "is_configured": true, 00:16:57.471 "data_offset": 2048, 00:16:57.471 "data_size": 63488 00:16:57.471 }, 00:16:57.471 { 00:16:57.471 "name": "BaseBdev2", 00:16:57.471 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:57.471 "is_configured": true, 00:16:57.471 "data_offset": 2048, 00:16:57.471 "data_size": 63488 00:16:57.471 }, 00:16:57.471 { 00:16:57.471 "name": "BaseBdev3", 00:16:57.471 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:57.471 "is_configured": true, 00:16:57.471 "data_offset": 2048, 
00:16:57.471 "data_size": 63488 00:16:57.471 }, 00:16:57.471 { 00:16:57.471 "name": "BaseBdev4", 00:16:57.471 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:16:57.471 "is_configured": true, 00:16:57.471 "data_offset": 2048, 00:16:57.471 "data_size": 63488 00:16:57.471 } 00:16:57.471 ] 00:16:57.471 }' 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.471 12:42:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.852 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.852 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.852 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.852 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.852 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.852 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.852 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.852 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.853 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.853 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:58.853 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.853 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.853 "name": "raid_bdev1", 00:16:58.853 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:58.853 "strip_size_kb": 64, 00:16:58.853 "state": "online", 00:16:58.853 "raid_level": "raid5f", 00:16:58.853 "superblock": true, 00:16:58.853 "num_base_bdevs": 4, 00:16:58.853 "num_base_bdevs_discovered": 4, 00:16:58.853 "num_base_bdevs_operational": 4, 00:16:58.853 "process": { 00:16:58.853 "type": "rebuild", 00:16:58.853 "target": "spare", 00:16:58.853 "progress": { 00:16:58.853 "blocks": 44160, 00:16:58.853 "percent": 23 00:16:58.853 } 00:16:58.853 }, 00:16:58.853 "base_bdevs_list": [ 00:16:58.853 { 00:16:58.853 "name": "spare", 00:16:58.853 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:16:58.853 "is_configured": true, 00:16:58.853 "data_offset": 2048, 00:16:58.853 "data_size": 63488 00:16:58.853 }, 00:16:58.853 { 00:16:58.853 "name": "BaseBdev2", 00:16:58.853 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:58.853 "is_configured": true, 00:16:58.853 "data_offset": 2048, 00:16:58.853 "data_size": 63488 00:16:58.853 }, 00:16:58.853 { 00:16:58.853 "name": "BaseBdev3", 00:16:58.853 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:58.853 "is_configured": true, 00:16:58.853 "data_offset": 2048, 00:16:58.853 "data_size": 63488 00:16:58.853 }, 00:16:58.853 { 00:16:58.853 "name": "BaseBdev4", 00:16:58.853 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:16:58.853 "is_configured": true, 00:16:58.853 "data_offset": 2048, 00:16:58.853 "data_size": 63488 00:16:58.853 } 00:16:58.853 ] 00:16:58.853 }' 00:16:58.853 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.853 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.853 12:42:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.853 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.853 12:42:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.793 "name": "raid_bdev1", 00:16:59.793 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:16:59.793 "strip_size_kb": 64, 00:16:59.793 "state": "online", 00:16:59.793 "raid_level": "raid5f", 00:16:59.793 "superblock": true, 00:16:59.793 "num_base_bdevs": 4, 00:16:59.793 "num_base_bdevs_discovered": 4, 00:16:59.793 "num_base_bdevs_operational": 
4, 00:16:59.793 "process": { 00:16:59.793 "type": "rebuild", 00:16:59.793 "target": "spare", 00:16:59.793 "progress": { 00:16:59.793 "blocks": 65280, 00:16:59.793 "percent": 34 00:16:59.793 } 00:16:59.793 }, 00:16:59.793 "base_bdevs_list": [ 00:16:59.793 { 00:16:59.793 "name": "spare", 00:16:59.793 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:16:59.793 "is_configured": true, 00:16:59.793 "data_offset": 2048, 00:16:59.793 "data_size": 63488 00:16:59.793 }, 00:16:59.793 { 00:16:59.793 "name": "BaseBdev2", 00:16:59.793 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:16:59.793 "is_configured": true, 00:16:59.793 "data_offset": 2048, 00:16:59.793 "data_size": 63488 00:16:59.793 }, 00:16:59.793 { 00:16:59.793 "name": "BaseBdev3", 00:16:59.793 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:16:59.793 "is_configured": true, 00:16:59.793 "data_offset": 2048, 00:16:59.793 "data_size": 63488 00:16:59.793 }, 00:16:59.793 { 00:16:59.793 "name": "BaseBdev4", 00:16:59.793 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:16:59.793 "is_configured": true, 00:16:59.793 "data_offset": 2048, 00:16:59.793 "data_size": 63488 00:16:59.793 } 00:16:59.793 ] 00:16:59.793 }' 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.793 12:42:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:01.174 
12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:01.174 "name": "raid_bdev1", 00:17:01.174 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:01.174 "strip_size_kb": 64, 00:17:01.174 "state": "online", 00:17:01.174 "raid_level": "raid5f", 00:17:01.174 "superblock": true, 00:17:01.174 "num_base_bdevs": 4, 00:17:01.174 "num_base_bdevs_discovered": 4, 00:17:01.174 "num_base_bdevs_operational": 4, 00:17:01.174 "process": { 00:17:01.174 "type": "rebuild", 00:17:01.174 "target": "spare", 00:17:01.174 "progress": { 00:17:01.174 "blocks": 86400, 00:17:01.174 "percent": 45 00:17:01.174 } 00:17:01.174 }, 00:17:01.174 "base_bdevs_list": [ 00:17:01.174 { 00:17:01.174 "name": "spare", 00:17:01.174 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:01.174 "is_configured": true, 00:17:01.174 "data_offset": 2048, 00:17:01.174 "data_size": 63488 00:17:01.174 }, 00:17:01.174 { 00:17:01.174 "name": "BaseBdev2", 00:17:01.174 "uuid": 
"4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:01.174 "is_configured": true, 00:17:01.174 "data_offset": 2048, 00:17:01.174 "data_size": 63488 00:17:01.174 }, 00:17:01.174 { 00:17:01.174 "name": "BaseBdev3", 00:17:01.174 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:01.174 "is_configured": true, 00:17:01.174 "data_offset": 2048, 00:17:01.174 "data_size": 63488 00:17:01.174 }, 00:17:01.174 { 00:17:01.174 "name": "BaseBdev4", 00:17:01.174 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:01.174 "is_configured": true, 00:17:01.174 "data_offset": 2048, 00:17:01.174 "data_size": 63488 00:17:01.174 } 00:17:01.174 ] 00:17:01.174 }' 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:01.174 12:43:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.113 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.113 "name": "raid_bdev1", 00:17:02.113 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:02.113 "strip_size_kb": 64, 00:17:02.114 "state": "online", 00:17:02.114 "raid_level": "raid5f", 00:17:02.114 "superblock": true, 00:17:02.114 "num_base_bdevs": 4, 00:17:02.114 "num_base_bdevs_discovered": 4, 00:17:02.114 "num_base_bdevs_operational": 4, 00:17:02.114 "process": { 00:17:02.114 "type": "rebuild", 00:17:02.114 "target": "spare", 00:17:02.114 "progress": { 00:17:02.114 "blocks": 109440, 00:17:02.114 "percent": 57 00:17:02.114 } 00:17:02.114 }, 00:17:02.114 "base_bdevs_list": [ 00:17:02.114 { 00:17:02.114 "name": "spare", 00:17:02.114 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 2048, 00:17:02.114 "data_size": 63488 00:17:02.114 }, 00:17:02.114 { 00:17:02.114 "name": "BaseBdev2", 00:17:02.114 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 2048, 00:17:02.114 "data_size": 63488 00:17:02.114 }, 00:17:02.114 { 00:17:02.114 "name": "BaseBdev3", 00:17:02.114 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 2048, 00:17:02.114 "data_size": 63488 00:17:02.114 }, 00:17:02.114 { 00:17:02.114 "name": "BaseBdev4", 00:17:02.114 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:02.114 "is_configured": true, 00:17:02.114 "data_offset": 
2048, 00:17:02.114 "data_size": 63488 00:17:02.114 } 00:17:02.114 ] 00:17:02.114 }' 00:17:02.114 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.114 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.114 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.114 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.114 12:43:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.053 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.313 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.313 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.313 
"name": "raid_bdev1", 00:17:03.313 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:03.313 "strip_size_kb": 64, 00:17:03.313 "state": "online", 00:17:03.313 "raid_level": "raid5f", 00:17:03.313 "superblock": true, 00:17:03.313 "num_base_bdevs": 4, 00:17:03.313 "num_base_bdevs_discovered": 4, 00:17:03.313 "num_base_bdevs_operational": 4, 00:17:03.313 "process": { 00:17:03.313 "type": "rebuild", 00:17:03.313 "target": "spare", 00:17:03.313 "progress": { 00:17:03.313 "blocks": 130560, 00:17:03.313 "percent": 68 00:17:03.313 } 00:17:03.313 }, 00:17:03.313 "base_bdevs_list": [ 00:17:03.313 { 00:17:03.313 "name": "spare", 00:17:03.313 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:03.313 "is_configured": true, 00:17:03.313 "data_offset": 2048, 00:17:03.313 "data_size": 63488 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "name": "BaseBdev2", 00:17:03.313 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:03.313 "is_configured": true, 00:17:03.313 "data_offset": 2048, 00:17:03.313 "data_size": 63488 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "name": "BaseBdev3", 00:17:03.313 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:03.313 "is_configured": true, 00:17:03.313 "data_offset": 2048, 00:17:03.313 "data_size": 63488 00:17:03.313 }, 00:17:03.313 { 00:17:03.313 "name": "BaseBdev4", 00:17:03.313 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:03.313 "is_configured": true, 00:17:03.313 "data_offset": 2048, 00:17:03.313 "data_size": 63488 00:17:03.313 } 00:17:03.313 ] 00:17:03.313 }' 00:17:03.313 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.313 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:03.313 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.313 12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.313 
12:43:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.253 "name": "raid_bdev1", 00:17:04.253 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:04.253 "strip_size_kb": 64, 00:17:04.253 "state": "online", 00:17:04.253 "raid_level": "raid5f", 00:17:04.253 "superblock": true, 00:17:04.253 "num_base_bdevs": 4, 00:17:04.253 "num_base_bdevs_discovered": 4, 00:17:04.253 "num_base_bdevs_operational": 4, 00:17:04.253 "process": { 00:17:04.253 "type": "rebuild", 00:17:04.253 "target": "spare", 00:17:04.253 "progress": { 00:17:04.253 "blocks": 153600, 00:17:04.253 "percent": 80 00:17:04.253 } 00:17:04.253 }, 
00:17:04.253 "base_bdevs_list": [ 00:17:04.253 { 00:17:04.253 "name": "spare", 00:17:04.253 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:04.253 "is_configured": true, 00:17:04.253 "data_offset": 2048, 00:17:04.253 "data_size": 63488 00:17:04.253 }, 00:17:04.253 { 00:17:04.253 "name": "BaseBdev2", 00:17:04.253 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:04.253 "is_configured": true, 00:17:04.253 "data_offset": 2048, 00:17:04.253 "data_size": 63488 00:17:04.253 }, 00:17:04.253 { 00:17:04.253 "name": "BaseBdev3", 00:17:04.253 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:04.253 "is_configured": true, 00:17:04.253 "data_offset": 2048, 00:17:04.253 "data_size": 63488 00:17:04.253 }, 00:17:04.253 { 00:17:04.253 "name": "BaseBdev4", 00:17:04.253 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:04.253 "is_configured": true, 00:17:04.253 "data_offset": 2048, 00:17:04.253 "data_size": 63488 00:17:04.253 } 00:17:04.253 ] 00:17:04.253 }' 00:17:04.253 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.513 12:43:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.513 12:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.513 12:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.513 12:43:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.452 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.453 "name": "raid_bdev1", 00:17:05.453 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:05.453 "strip_size_kb": 64, 00:17:05.453 "state": "online", 00:17:05.453 "raid_level": "raid5f", 00:17:05.453 "superblock": true, 00:17:05.453 "num_base_bdevs": 4, 00:17:05.453 "num_base_bdevs_discovered": 4, 00:17:05.453 "num_base_bdevs_operational": 4, 00:17:05.453 "process": { 00:17:05.453 "type": "rebuild", 00:17:05.453 "target": "spare", 00:17:05.453 "progress": { 00:17:05.453 "blocks": 174720, 00:17:05.453 "percent": 91 00:17:05.453 } 00:17:05.453 }, 00:17:05.453 "base_bdevs_list": [ 00:17:05.453 { 00:17:05.453 "name": "spare", 00:17:05.453 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:05.453 "is_configured": true, 00:17:05.453 "data_offset": 2048, 00:17:05.453 "data_size": 63488 00:17:05.453 }, 00:17:05.453 { 00:17:05.453 "name": "BaseBdev2", 00:17:05.453 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:05.453 "is_configured": true, 00:17:05.453 "data_offset": 2048, 00:17:05.453 "data_size": 63488 00:17:05.453 }, 00:17:05.453 { 00:17:05.453 "name": "BaseBdev3", 
00:17:05.453 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:05.453 "is_configured": true, 00:17:05.453 "data_offset": 2048, 00:17:05.453 "data_size": 63488 00:17:05.453 }, 00:17:05.453 { 00:17:05.453 "name": "BaseBdev4", 00:17:05.453 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:05.453 "is_configured": true, 00:17:05.453 "data_offset": 2048, 00:17:05.453 "data_size": 63488 00:17:05.453 } 00:17:05.453 ] 00:17:05.453 }' 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.453 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.711 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.711 12:43:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:06.280 [2024-12-14 12:43:05.932619] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:06.280 [2024-12-14 12:43:05.932753] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:06.280 [2024-12-14 12:43:05.932910] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.539 12:43:06 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.539 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.539 "name": "raid_bdev1", 00:17:06.539 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:06.539 "strip_size_kb": 64, 00:17:06.539 "state": "online", 00:17:06.539 "raid_level": "raid5f", 00:17:06.539 "superblock": true, 00:17:06.539 "num_base_bdevs": 4, 00:17:06.539 "num_base_bdevs_discovered": 4, 00:17:06.539 "num_base_bdevs_operational": 4, 00:17:06.539 "base_bdevs_list": [ 00:17:06.539 { 00:17:06.539 "name": "spare", 00:17:06.539 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:06.539 "is_configured": true, 00:17:06.539 "data_offset": 2048, 00:17:06.540 "data_size": 63488 00:17:06.540 }, 00:17:06.540 { 00:17:06.540 "name": "BaseBdev2", 00:17:06.540 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:06.540 "is_configured": true, 00:17:06.540 "data_offset": 2048, 00:17:06.540 "data_size": 63488 00:17:06.540 }, 00:17:06.540 { 00:17:06.540 "name": "BaseBdev3", 00:17:06.540 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:06.540 "is_configured": true, 00:17:06.540 "data_offset": 2048, 00:17:06.540 "data_size": 63488 00:17:06.540 }, 00:17:06.540 { 00:17:06.540 "name": "BaseBdev4", 00:17:06.540 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:06.540 "is_configured": true, 00:17:06.540 "data_offset": 2048, 
00:17:06.540 "data_size": 63488 00:17:06.540 } 00:17:06.540 ] 00:17:06.540 }' 00:17:06.540 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.799 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.799 "name": "raid_bdev1", 00:17:06.799 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:06.799 "strip_size_kb": 64, 00:17:06.799 
"state": "online", 00:17:06.799 "raid_level": "raid5f", 00:17:06.799 "superblock": true, 00:17:06.799 "num_base_bdevs": 4, 00:17:06.799 "num_base_bdevs_discovered": 4, 00:17:06.799 "num_base_bdevs_operational": 4, 00:17:06.799 "base_bdevs_list": [ 00:17:06.799 { 00:17:06.799 "name": "spare", 00:17:06.799 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:06.799 "is_configured": true, 00:17:06.799 "data_offset": 2048, 00:17:06.799 "data_size": 63488 00:17:06.799 }, 00:17:06.799 { 00:17:06.799 "name": "BaseBdev2", 00:17:06.799 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:06.799 "is_configured": true, 00:17:06.799 "data_offset": 2048, 00:17:06.799 "data_size": 63488 00:17:06.799 }, 00:17:06.799 { 00:17:06.799 "name": "BaseBdev3", 00:17:06.799 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:06.799 "is_configured": true, 00:17:06.799 "data_offset": 2048, 00:17:06.799 "data_size": 63488 00:17:06.799 }, 00:17:06.799 { 00:17:06.799 "name": "BaseBdev4", 00:17:06.799 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:06.799 "is_configured": true, 00:17:06.799 "data_offset": 2048, 00:17:06.799 "data_size": 63488 00:17:06.799 } 00:17:06.799 ] 00:17:06.799 }' 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.800 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.059 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.059 "name": "raid_bdev1", 00:17:07.059 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:07.059 "strip_size_kb": 64, 00:17:07.059 "state": "online", 00:17:07.059 "raid_level": "raid5f", 00:17:07.059 "superblock": true, 00:17:07.059 "num_base_bdevs": 4, 00:17:07.059 "num_base_bdevs_discovered": 4, 00:17:07.059 "num_base_bdevs_operational": 4, 00:17:07.059 "base_bdevs_list": [ 00:17:07.059 { 00:17:07.059 "name": "spare", 00:17:07.059 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:07.059 "is_configured": true, 00:17:07.059 
"data_offset": 2048, 00:17:07.059 "data_size": 63488 00:17:07.059 }, 00:17:07.059 { 00:17:07.059 "name": "BaseBdev2", 00:17:07.059 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:07.059 "is_configured": true, 00:17:07.059 "data_offset": 2048, 00:17:07.059 "data_size": 63488 00:17:07.059 }, 00:17:07.059 { 00:17:07.059 "name": "BaseBdev3", 00:17:07.059 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:07.059 "is_configured": true, 00:17:07.059 "data_offset": 2048, 00:17:07.059 "data_size": 63488 00:17:07.059 }, 00:17:07.059 { 00:17:07.059 "name": "BaseBdev4", 00:17:07.059 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:07.059 "is_configured": true, 00:17:07.059 "data_offset": 2048, 00:17:07.059 "data_size": 63488 00:17:07.059 } 00:17:07.059 ] 00:17:07.059 }' 00:17:07.059 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.059 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.319 [2024-12-14 12:43:06.912941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.319 [2024-12-14 12:43:06.913022] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.319 [2024-12-14 12:43:06.913138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.319 [2024-12-14 12:43:06.913282] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.319 [2024-12-14 12:43:06.913356] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:07.319 
12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.319 12:43:06 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:07.580 /dev/nbd0 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.580 1+0 records in 00:17:07.580 1+0 records out 00:17:07.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487678 s, 8.4 MB/s 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.580 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:07.840 /dev/nbd1 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.841 1+0 records in 00:17:07.841 1+0 records out 00:17:07.841 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385166 s, 10.6 MB/s 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.841 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:08.100 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:08.100 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:08.100 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:08.100 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:08.100 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:08.100 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.100 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.360 12:43:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.360 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.360 
12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.620 [2024-12-14 12:43:08.103154] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.620 [2024-12-14 12:43:08.103257] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.620 [2024-12-14 12:43:08.103286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:08.620 [2024-12-14 12:43:08.103296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.620 [2024-12-14 12:43:08.105550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.620 [2024-12-14 12:43:08.105589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.620 [2024-12-14 12:43:08.105687] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:08.620 [2024-12-14 12:43:08.105735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.620 [2024-12-14 12:43:08.105877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.620 [2024-12-14 12:43:08.105986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:08.620 [2024-12-14 12:43:08.106087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:08.620 spare 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.620 [2024-12-14 12:43:08.205993] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:08.620 [2024-12-14 12:43:08.206089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:08.620 [2024-12-14 12:43:08.206402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:08.620 [2024-12-14 12:43:08.213659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:08.620 [2024-12-14 12:43:08.213714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:08.620 [2024-12-14 12:43:08.213947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.620 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.620 "name": "raid_bdev1", 00:17:08.620 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:08.620 "strip_size_kb": 64, 00:17:08.620 "state": "online", 00:17:08.620 "raid_level": "raid5f", 00:17:08.620 "superblock": true, 00:17:08.620 "num_base_bdevs": 4, 00:17:08.620 "num_base_bdevs_discovered": 4, 00:17:08.620 "num_base_bdevs_operational": 4, 00:17:08.620 "base_bdevs_list": [ 00:17:08.620 { 00:17:08.620 "name": "spare", 00:17:08.620 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:08.620 "is_configured": true, 00:17:08.620 "data_offset": 2048, 00:17:08.620 "data_size": 63488 00:17:08.620 }, 00:17:08.620 { 00:17:08.620 "name": "BaseBdev2", 00:17:08.620 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:08.620 "is_configured": true, 00:17:08.620 "data_offset": 2048, 00:17:08.620 "data_size": 63488 00:17:08.620 }, 00:17:08.620 { 00:17:08.620 "name": "BaseBdev3", 00:17:08.620 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:08.620 
"is_configured": true, 00:17:08.620 "data_offset": 2048, 00:17:08.620 "data_size": 63488 00:17:08.620 }, 00:17:08.620 { 00:17:08.620 "name": "BaseBdev4", 00:17:08.620 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:08.620 "is_configured": true, 00:17:08.620 "data_offset": 2048, 00:17:08.620 "data_size": 63488 00:17:08.620 } 00:17:08.620 ] 00:17:08.621 }' 00:17:08.621 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.621 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:09.190 "name": "raid_bdev1", 00:17:09.190 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:09.190 "strip_size_kb": 64, 00:17:09.190 "state": "online", 00:17:09.190 "raid_level": "raid5f", 
00:17:09.190 "superblock": true, 00:17:09.190 "num_base_bdevs": 4, 00:17:09.190 "num_base_bdevs_discovered": 4, 00:17:09.190 "num_base_bdevs_operational": 4, 00:17:09.190 "base_bdevs_list": [ 00:17:09.190 { 00:17:09.190 "name": "spare", 00:17:09.190 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:09.190 "is_configured": true, 00:17:09.190 "data_offset": 2048, 00:17:09.190 "data_size": 63488 00:17:09.190 }, 00:17:09.190 { 00:17:09.190 "name": "BaseBdev2", 00:17:09.190 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:09.190 "is_configured": true, 00:17:09.190 "data_offset": 2048, 00:17:09.190 "data_size": 63488 00:17:09.190 }, 00:17:09.190 { 00:17:09.190 "name": "BaseBdev3", 00:17:09.190 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:09.190 "is_configured": true, 00:17:09.190 "data_offset": 2048, 00:17:09.190 "data_size": 63488 00:17:09.190 }, 00:17:09.190 { 00:17:09.190 "name": "BaseBdev4", 00:17:09.190 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:09.190 "is_configured": true, 00:17:09.190 "data_offset": 2048, 00:17:09.190 "data_size": 63488 00:17:09.190 } 00:17:09.190 ] 00:17:09.190 }' 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:09.190 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.191 [2024-12-14 12:43:08.893220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.191 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.450 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.450 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.450 "name": "raid_bdev1", 00:17:09.450 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:09.450 "strip_size_kb": 64, 00:17:09.450 "state": "online", 00:17:09.450 "raid_level": "raid5f", 00:17:09.450 "superblock": true, 00:17:09.450 "num_base_bdevs": 4, 00:17:09.450 "num_base_bdevs_discovered": 3, 00:17:09.450 "num_base_bdevs_operational": 3, 00:17:09.450 "base_bdevs_list": [ 00:17:09.450 { 00:17:09.450 "name": null, 00:17:09.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.450 "is_configured": false, 00:17:09.450 "data_offset": 0, 00:17:09.450 "data_size": 63488 00:17:09.450 }, 00:17:09.450 { 00:17:09.450 "name": "BaseBdev2", 00:17:09.450 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:09.450 "is_configured": true, 00:17:09.450 "data_offset": 2048, 00:17:09.450 "data_size": 63488 00:17:09.450 }, 00:17:09.450 { 00:17:09.450 "name": "BaseBdev3", 00:17:09.450 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:09.450 "is_configured": true, 00:17:09.450 "data_offset": 2048, 00:17:09.450 "data_size": 63488 00:17:09.450 }, 00:17:09.450 { 00:17:09.450 "name": "BaseBdev4", 00:17:09.450 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:09.450 "is_configured": true, 00:17:09.450 "data_offset": 2048, 00:17:09.450 "data_size": 63488 00:17:09.450 } 00:17:09.450 ] 00:17:09.450 }' 00:17:09.450 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:09.450 12:43:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.710 12:43:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.710 12:43:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.710 12:43:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.710 [2024-12-14 12:43:09.300536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.710 [2024-12-14 12:43:09.300723] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:09.710 [2024-12-14 12:43:09.300741] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:09.710 [2024-12-14 12:43:09.300780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.710 [2024-12-14 12:43:09.315377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:09.710 12:43:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.710 12:43:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:09.710 [2024-12-14 12:43:09.324310] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.650 12:43:10 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.650 "name": "raid_bdev1", 00:17:10.650 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:10.650 "strip_size_kb": 64, 00:17:10.650 "state": "online", 00:17:10.650 "raid_level": "raid5f", 00:17:10.650 "superblock": true, 00:17:10.650 "num_base_bdevs": 4, 00:17:10.650 "num_base_bdevs_discovered": 4, 00:17:10.650 "num_base_bdevs_operational": 4, 00:17:10.650 "process": { 00:17:10.650 "type": "rebuild", 00:17:10.650 "target": "spare", 00:17:10.650 "progress": { 00:17:10.650 "blocks": 19200, 00:17:10.650 "percent": 10 00:17:10.650 } 00:17:10.650 }, 00:17:10.650 "base_bdevs_list": [ 00:17:10.650 { 00:17:10.650 "name": "spare", 00:17:10.650 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:10.650 "is_configured": true, 00:17:10.650 "data_offset": 2048, 00:17:10.650 "data_size": 63488 00:17:10.650 }, 00:17:10.650 { 00:17:10.650 "name": "BaseBdev2", 00:17:10.650 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:10.650 "is_configured": true, 00:17:10.650 "data_offset": 2048, 00:17:10.650 "data_size": 63488 00:17:10.650 }, 00:17:10.650 { 00:17:10.650 "name": "BaseBdev3", 00:17:10.650 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:10.650 "is_configured": true, 00:17:10.650 "data_offset": 2048, 00:17:10.650 "data_size": 
63488 00:17:10.650 }, 00:17:10.650 { 00:17:10.650 "name": "BaseBdev4", 00:17:10.650 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:10.650 "is_configured": true, 00:17:10.650 "data_offset": 2048, 00:17:10.650 "data_size": 63488 00:17:10.650 } 00:17:10.650 ] 00:17:10.650 }' 00:17:10.650 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.909 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.909 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.909 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.909 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.910 [2024-12-14 12:43:10.479498] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.910 [2024-12-14 12:43:10.531506] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.910 [2024-12-14 12:43:10.531573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.910 [2024-12-14 12:43:10.531591] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.910 [2024-12-14 12:43:10.531600] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.910 "name": "raid_bdev1", 00:17:10.910 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:10.910 "strip_size_kb": 64, 00:17:10.910 "state": "online", 00:17:10.910 "raid_level": "raid5f", 00:17:10.910 "superblock": true, 00:17:10.910 "num_base_bdevs": 4, 00:17:10.910 "num_base_bdevs_discovered": 3, 00:17:10.910 "num_base_bdevs_operational": 3, 00:17:10.910 "base_bdevs_list": [ 00:17:10.910 
{ 00:17:10.910 "name": null, 00:17:10.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.910 "is_configured": false, 00:17:10.910 "data_offset": 0, 00:17:10.910 "data_size": 63488 00:17:10.910 }, 00:17:10.910 { 00:17:10.910 "name": "BaseBdev2", 00:17:10.910 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:10.910 "is_configured": true, 00:17:10.910 "data_offset": 2048, 00:17:10.910 "data_size": 63488 00:17:10.910 }, 00:17:10.910 { 00:17:10.910 "name": "BaseBdev3", 00:17:10.910 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:10.910 "is_configured": true, 00:17:10.910 "data_offset": 2048, 00:17:10.910 "data_size": 63488 00:17:10.910 }, 00:17:10.910 { 00:17:10.910 "name": "BaseBdev4", 00:17:10.910 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:10.910 "is_configured": true, 00:17:10.910 "data_offset": 2048, 00:17:10.910 "data_size": 63488 00:17:10.910 } 00:17:10.910 ] 00:17:10.910 }' 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.910 12:43:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.479 12:43:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.479 12:43:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.479 12:43:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.479 [2024-12-14 12:43:11.023405] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.479 [2024-12-14 12:43:11.023548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.479 [2024-12-14 12:43:11.023598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:11.479 [2024-12-14 12:43:11.023637] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.479 [2024-12-14 12:43:11.024218] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.479 [2024-12-14 12:43:11.024297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.479 [2024-12-14 12:43:11.024449] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:11.479 [2024-12-14 12:43:11.024499] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:11.479 [2024-12-14 12:43:11.024551] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:11.479 [2024-12-14 12:43:11.024637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.479 [2024-12-14 12:43:11.039844] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:11.479 spare 00:17:11.479 12:43:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.479 12:43:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:11.479 [2024-12-14 12:43:11.049181] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.418 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.418 "name": "raid_bdev1", 00:17:12.418 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:12.419 "strip_size_kb": 64, 00:17:12.419 "state": "online", 00:17:12.419 "raid_level": "raid5f", 00:17:12.419 "superblock": true, 00:17:12.419 "num_base_bdevs": 4, 00:17:12.419 "num_base_bdevs_discovered": 4, 00:17:12.419 "num_base_bdevs_operational": 4, 00:17:12.419 "process": { 00:17:12.419 "type": "rebuild", 00:17:12.419 "target": "spare", 00:17:12.419 "progress": { 00:17:12.419 "blocks": 19200, 00:17:12.419 "percent": 10 00:17:12.419 } 00:17:12.419 }, 00:17:12.419 "base_bdevs_list": [ 00:17:12.419 { 00:17:12.419 "name": "spare", 00:17:12.419 "uuid": "169135ee-0d64-5273-914e-64093dc05adc", 00:17:12.419 "is_configured": true, 00:17:12.419 "data_offset": 2048, 00:17:12.419 "data_size": 63488 00:17:12.419 }, 00:17:12.419 { 00:17:12.419 "name": "BaseBdev2", 00:17:12.419 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:12.419 "is_configured": true, 00:17:12.419 "data_offset": 2048, 00:17:12.419 "data_size": 63488 00:17:12.419 }, 00:17:12.419 { 00:17:12.419 "name": "BaseBdev3", 00:17:12.419 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:12.419 "is_configured": true, 00:17:12.419 "data_offset": 2048, 00:17:12.419 "data_size": 63488 00:17:12.419 }, 00:17:12.419 { 00:17:12.419 "name": "BaseBdev4", 00:17:12.419 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:12.419 "is_configured": true, 00:17:12.419 "data_offset": 2048, 00:17:12.419 "data_size": 63488 00:17:12.419 } 
00:17:12.419 ] 00:17:12.419 }' 00:17:12.419 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.419 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.419 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.679 [2024-12-14 12:43:12.208263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.679 [2024-12-14 12:43:12.256225] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:12.679 [2024-12-14 12:43:12.256280] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.679 [2024-12-14 12:43:12.256299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.679 [2024-12-14 12:43:12.256307] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.679 "name": "raid_bdev1", 00:17:12.679 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:12.679 "strip_size_kb": 64, 00:17:12.679 "state": "online", 00:17:12.679 "raid_level": "raid5f", 00:17:12.679 "superblock": true, 00:17:12.679 "num_base_bdevs": 4, 00:17:12.679 "num_base_bdevs_discovered": 3, 00:17:12.679 "num_base_bdevs_operational": 3, 00:17:12.679 "base_bdevs_list": [ 00:17:12.679 { 00:17:12.679 "name": null, 00:17:12.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.679 "is_configured": false, 00:17:12.679 "data_offset": 0, 00:17:12.679 "data_size": 63488 00:17:12.679 }, 00:17:12.679 { 00:17:12.679 
"name": "BaseBdev2", 00:17:12.679 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:12.679 "is_configured": true, 00:17:12.679 "data_offset": 2048, 00:17:12.679 "data_size": 63488 00:17:12.679 }, 00:17:12.679 { 00:17:12.679 "name": "BaseBdev3", 00:17:12.679 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:12.679 "is_configured": true, 00:17:12.679 "data_offset": 2048, 00:17:12.679 "data_size": 63488 00:17:12.679 }, 00:17:12.679 { 00:17:12.679 "name": "BaseBdev4", 00:17:12.679 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:12.679 "is_configured": true, 00:17:12.679 "data_offset": 2048, 00:17:12.679 "data_size": 63488 00:17:12.679 } 00:17:12.679 ] 00:17:12.679 }' 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.679 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:13.249 "name": "raid_bdev1", 00:17:13.249 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:13.249 "strip_size_kb": 64, 00:17:13.249 "state": "online", 00:17:13.249 "raid_level": "raid5f", 00:17:13.249 "superblock": true, 00:17:13.249 "num_base_bdevs": 4, 00:17:13.249 "num_base_bdevs_discovered": 3, 00:17:13.249 "num_base_bdevs_operational": 3, 00:17:13.249 "base_bdevs_list": [ 00:17:13.249 { 00:17:13.249 "name": null, 00:17:13.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.249 "is_configured": false, 00:17:13.249 "data_offset": 0, 00:17:13.249 "data_size": 63488 00:17:13.249 }, 00:17:13.249 { 00:17:13.249 "name": "BaseBdev2", 00:17:13.249 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:13.249 "is_configured": true, 00:17:13.249 "data_offset": 2048, 00:17:13.249 "data_size": 63488 00:17:13.249 }, 00:17:13.249 { 00:17:13.249 "name": "BaseBdev3", 00:17:13.249 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:13.249 "is_configured": true, 00:17:13.249 "data_offset": 2048, 00:17:13.249 "data_size": 63488 00:17:13.249 }, 00:17:13.249 { 00:17:13.249 "name": "BaseBdev4", 00:17:13.249 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:13.249 "is_configured": true, 00:17:13.249 "data_offset": 2048, 00:17:13.249 "data_size": 63488 00:17:13.249 } 00:17:13.249 ] 00:17:13.249 }' 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.249 [2024-12-14 12:43:12.949474] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:13.249 [2024-12-14 12:43:12.949531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.249 [2024-12-14 12:43:12.949554] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:13.249 [2024-12-14 12:43:12.949564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.249 [2024-12-14 12:43:12.950033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.249 [2024-12-14 12:43:12.950069] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:13.249 [2024-12-14 12:43:12.950153] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:13.249 [2024-12-14 12:43:12.950204] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:13.249 [2024-12-14 12:43:12.950217] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:13.249 [2024-12-14 12:43:12.950227] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:17:13.249 BaseBdev1 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.249 12:43:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.632 12:43:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.632 12:43:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.632 "name": "raid_bdev1", 00:17:14.632 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:14.632 "strip_size_kb": 64, 00:17:14.632 "state": "online", 00:17:14.632 "raid_level": "raid5f", 00:17:14.632 "superblock": true, 00:17:14.632 "num_base_bdevs": 4, 00:17:14.632 "num_base_bdevs_discovered": 3, 00:17:14.632 "num_base_bdevs_operational": 3, 00:17:14.632 "base_bdevs_list": [ 00:17:14.632 { 00:17:14.632 "name": null, 00:17:14.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.632 "is_configured": false, 00:17:14.632 "data_offset": 0, 00:17:14.632 "data_size": 63488 00:17:14.632 }, 00:17:14.632 { 00:17:14.632 "name": "BaseBdev2", 00:17:14.632 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:14.632 "is_configured": true, 00:17:14.632 "data_offset": 2048, 00:17:14.632 "data_size": 63488 00:17:14.632 }, 00:17:14.632 { 00:17:14.632 "name": "BaseBdev3", 00:17:14.632 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:14.632 "is_configured": true, 00:17:14.632 "data_offset": 2048, 00:17:14.632 "data_size": 63488 00:17:14.632 }, 00:17:14.632 { 00:17:14.632 "name": "BaseBdev4", 00:17:14.632 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:14.632 "is_configured": true, 00:17:14.632 "data_offset": 2048, 00:17:14.632 "data_size": 63488 00:17:14.632 } 00:17:14.632 ] 00:17:14.632 }' 00:17:14.632 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.632 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.892 12:43:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.892 "name": "raid_bdev1", 00:17:14.892 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:14.892 "strip_size_kb": 64, 00:17:14.892 "state": "online", 00:17:14.892 "raid_level": "raid5f", 00:17:14.892 "superblock": true, 00:17:14.892 "num_base_bdevs": 4, 00:17:14.892 "num_base_bdevs_discovered": 3, 00:17:14.892 "num_base_bdevs_operational": 3, 00:17:14.892 "base_bdevs_list": [ 00:17:14.892 { 00:17:14.892 "name": null, 00:17:14.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.892 "is_configured": false, 00:17:14.892 "data_offset": 0, 00:17:14.892 "data_size": 63488 00:17:14.892 }, 00:17:14.892 { 00:17:14.892 "name": "BaseBdev2", 00:17:14.892 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:14.892 "is_configured": true, 00:17:14.892 "data_offset": 2048, 00:17:14.892 "data_size": 63488 00:17:14.892 }, 00:17:14.892 { 00:17:14.892 "name": "BaseBdev3", 00:17:14.892 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:14.892 "is_configured": true, 00:17:14.892 "data_offset": 2048, 00:17:14.892 "data_size": 63488 00:17:14.892 }, 00:17:14.892 { 00:17:14.892 "name": "BaseBdev4", 00:17:14.892 "uuid": 
"e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:14.892 "is_configured": true, 00:17:14.892 "data_offset": 2048, 00:17:14.892 "data_size": 63488 00:17:14.892 } 00:17:14.892 ] 00:17:14.892 }' 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.892 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.892 [2024-12-14 12:43:14.546852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.892 
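The `NOT rpc_cmd bdev_raid_add_base_bdev ...` invocation traced above uses the `NOT` helper from `autotest_common.sh` to assert that an RPC is *expected* to fail. A minimal standalone sketch of that inverted-status pattern (the real helper also validates the argument via `valid_exec_arg` and handles a few more status codes; this version is illustrative only):

```shell
# Hedged sketch of the NOT expected-failure wrapper: run a command and
# succeed only if the command itself failed.
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed only if the wrapped command returned a nonzero status.
    (( es != 0 ))
}

NOT false && echo "expected failure observed"
NOT true || echo "unexpected success caught"
```

This is why the trace shows `es=1` and `(( !es == 0 ))` after the RPC returns the `-22` JSON-RPC error: the test passes precisely because the add-base-bdev call was rejected.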
[2024-12-14 12:43:14.547140] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:14.892 [2024-12-14 12:43:14.547221] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:14.892 request: 00:17:14.892 { 00:17:14.892 "base_bdev": "BaseBdev1", 00:17:14.892 "raid_bdev": "raid_bdev1", 00:17:14.892 "method": "bdev_raid_add_base_bdev", 00:17:14.892 "req_id": 1 00:17:14.892 } 00:17:14.892 Got JSON-RPC error response 00:17:14.892 response: 00:17:14.893 { 00:17:14.893 "code": -22, 00:17:14.893 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:14.893 } 00:17:14.893 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:14.893 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:17:14.893 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:14.893 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:14.893 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:14.893 12:43:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:15.832 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.833 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.091 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.091 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.091 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.091 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.091 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.091 "name": "raid_bdev1", 00:17:16.091 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:16.091 "strip_size_kb": 64, 00:17:16.091 "state": "online", 00:17:16.091 "raid_level": "raid5f", 00:17:16.091 "superblock": true, 00:17:16.091 "num_base_bdevs": 4, 00:17:16.091 "num_base_bdevs_discovered": 3, 00:17:16.092 "num_base_bdevs_operational": 3, 00:17:16.092 "base_bdevs_list": [ 00:17:16.092 { 00:17:16.092 "name": null, 00:17:16.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.092 "is_configured": false, 00:17:16.092 "data_offset": 0, 00:17:16.092 "data_size": 63488 00:17:16.092 }, 00:17:16.092 { 00:17:16.092 "name": "BaseBdev2", 00:17:16.092 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:16.092 "is_configured": true, 00:17:16.092 "data_offset": 2048, 00:17:16.092 "data_size": 63488 00:17:16.092 }, 00:17:16.092 { 00:17:16.092 "name": 
"BaseBdev3", 00:17:16.092 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:16.092 "is_configured": true, 00:17:16.092 "data_offset": 2048, 00:17:16.092 "data_size": 63488 00:17:16.092 }, 00:17:16.092 { 00:17:16.092 "name": "BaseBdev4", 00:17:16.092 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:16.092 "is_configured": true, 00:17:16.092 "data_offset": 2048, 00:17:16.092 "data_size": 63488 00:17:16.092 } 00:17:16.092 ] 00:17:16.092 }' 00:17:16.092 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.092 12:43:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.351 "name": "raid_bdev1", 00:17:16.351 "uuid": "ca15a935-a36f-47e0-b786-9de7c9cde437", 00:17:16.351 
"strip_size_kb": 64, 00:17:16.351 "state": "online", 00:17:16.351 "raid_level": "raid5f", 00:17:16.351 "superblock": true, 00:17:16.351 "num_base_bdevs": 4, 00:17:16.351 "num_base_bdevs_discovered": 3, 00:17:16.351 "num_base_bdevs_operational": 3, 00:17:16.351 "base_bdevs_list": [ 00:17:16.351 { 00:17:16.351 "name": null, 00:17:16.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.351 "is_configured": false, 00:17:16.351 "data_offset": 0, 00:17:16.351 "data_size": 63488 00:17:16.351 }, 00:17:16.351 { 00:17:16.351 "name": "BaseBdev2", 00:17:16.351 "uuid": "4c35e03e-53df-5a5e-8fc5-3433394dbceb", 00:17:16.351 "is_configured": true, 00:17:16.351 "data_offset": 2048, 00:17:16.351 "data_size": 63488 00:17:16.351 }, 00:17:16.351 { 00:17:16.351 "name": "BaseBdev3", 00:17:16.351 "uuid": "75f7200c-21f9-52e1-822b-5056833978d8", 00:17:16.351 "is_configured": true, 00:17:16.351 "data_offset": 2048, 00:17:16.351 "data_size": 63488 00:17:16.351 }, 00:17:16.351 { 00:17:16.351 "name": "BaseBdev4", 00:17:16.351 "uuid": "e69c8a6a-4520-5307-a657-fa8ba0364b29", 00:17:16.351 "is_configured": true, 00:17:16.351 "data_offset": 2048, 00:17:16.351 "data_size": 63488 00:17:16.351 } 00:17:16.351 ] 00:17:16.351 }' 00:17:16.351 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 86856 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 86856 ']' 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 86856 00:17:16.612 
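The `killprocess 86856` sequence above first probes the pid with `kill -0` and resolves its name with `ps --no-headers -o comm=` before signalling it. A hedged sketch of that pattern (the real helper additionally special-cases sudo-owned processes and echoes the pid it kills; this minimal version does not):

```shell
# Sketch of the killprocess helper pattern: confirm the pid is alive and
# look up its process name before sending SIGTERM.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1        # pid must exist
    name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
}

sleep 60 &                    # illustrative victim process
killprocess_sketch $!
```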
12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86856 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:16.612 killing process with pid 86856 00:17:16.612 Received shutdown signal, test time was about 60.000000 seconds 00:17:16.612 00:17:16.612 Latency(us) 00:17:16.612 [2024-12-14T12:43:16.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.612 [2024-12-14T12:43:16.350Z] =================================================================================================================== 00:17:16.612 [2024-12-14T12:43:16.350Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86856' 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 86856 00:17:16.612 [2024-12-14 12:43:16.209703] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:16.612 [2024-12-14 12:43:16.209853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:16.612 12:43:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 86856 00:17:16.612 [2024-12-14 12:43:16.209941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:16.612 [2024-12-14 12:43:16.209955] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:17.182 [2024-12-14 12:43:16.684265] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.122 12:43:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:17:18.122 00:17:18.122 real 0m26.882s 00:17:18.122 user 0m33.857s 00:17:18.122 sys 0m2.890s 00:17:18.122 12:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.122 ************************************ 00:17:18.122 END TEST raid5f_rebuild_test_sb 00:17:18.122 ************************************ 00:17:18.122 12:43:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.122 12:43:17 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:17:18.122 12:43:17 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:18.122 12:43:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:18.122 12:43:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.122 12:43:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.122 ************************************ 00:17:18.122 START TEST raid_state_function_test_sb_4k 00:17:18.122 ************************************ 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=87669 
00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:18.122 Process raid pid: 87669 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87669' 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 87669 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 87669 ']' 00:17:18.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.122 12:43:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.382 [2024-12-14 12:43:17.930300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:17:18.382 [2024-12-14 12:43:17.930449] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.382 [2024-12-14 12:43:18.106620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.642 [2024-12-14 12:43:18.214618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.902 [2024-12-14 12:43:18.411371] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:18.902 [2024-12-14 12:43:18.411407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.171 [2024-12-14 12:43:18.752153] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.171 [2024-12-14 12:43:18.752201] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.171 [2024-12-14 12:43:18.752212] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.171 [2024-12-14 12:43:18.752221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
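The `verify_raid_bdev_state Existed_Raid configuring raid1 0 2` check entered next boils down to fetching the raid bdev's JSON via `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` and comparing individual fields. A minimal standalone sketch of those field checks, run against an abridged copy of the JSON captured in this log rather than a live SPDK target (`jq` assumed available; `check_field` is an illustrative name, not part of the real script):

```shell
# Sketch of verify_raid_bdev_state's comparisons over canned RPC output.
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs_operational": 2
}'
check_field() { [[ $(jq -r ".$1" <<< "$raid_bdev_info") == "$2" ]]; }

check_field state configuring &&
check_field raid_level raid1 &&
check_field num_base_bdevs_operational 2 &&
echo "Existed_Raid state verified"
```

The same `jq -r` extraction style (with `// "none"` defaults for the optional `.process.type` and `.process.target` fields) is what `verify_raid_bdev_process` uses elsewhere in this trace.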
00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.171 "name": "Existed_Raid", 00:17:19.171 "uuid": 
"5f633f2b-9216-40eb-80a7-f8652e609dde", 00:17:19.171 "strip_size_kb": 0, 00:17:19.171 "state": "configuring", 00:17:19.171 "raid_level": "raid1", 00:17:19.171 "superblock": true, 00:17:19.171 "num_base_bdevs": 2, 00:17:19.171 "num_base_bdevs_discovered": 0, 00:17:19.171 "num_base_bdevs_operational": 2, 00:17:19.171 "base_bdevs_list": [ 00:17:19.171 { 00:17:19.171 "name": "BaseBdev1", 00:17:19.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.171 "is_configured": false, 00:17:19.171 "data_offset": 0, 00:17:19.171 "data_size": 0 00:17:19.171 }, 00:17:19.171 { 00:17:19.171 "name": "BaseBdev2", 00:17:19.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.171 "is_configured": false, 00:17:19.171 "data_offset": 0, 00:17:19.171 "data_size": 0 00:17:19.171 } 00:17:19.171 ] 00:17:19.171 }' 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.171 12:43:18 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.448 [2024-12-14 12:43:19.159381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:19.448 [2024-12-14 12:43:19.159457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:19.448 12:43:19 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.448 [2024-12-14 12:43:19.171357] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:19.448 [2024-12-14 12:43:19.171431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:19.448 [2024-12-14 12:43:19.171458] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.448 [2024-12-14 12:43:19.171482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.448 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.729 [2024-12-14 12:43:19.216489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.729 BaseBdev1 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.729 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.729 [ 00:17:19.729 { 00:17:19.729 "name": "BaseBdev1", 00:17:19.729 "aliases": [ 00:17:19.729 "feeeb694-35c4-423a-9f64-b86721a0a018" 00:17:19.729 ], 00:17:19.729 "product_name": "Malloc disk", 00:17:19.729 "block_size": 4096, 00:17:19.729 "num_blocks": 8192, 00:17:19.729 "uuid": "feeeb694-35c4-423a-9f64-b86721a0a018", 00:17:19.729 "assigned_rate_limits": { 00:17:19.729 "rw_ios_per_sec": 0, 00:17:19.729 "rw_mbytes_per_sec": 0, 00:17:19.729 "r_mbytes_per_sec": 0, 00:17:19.729 "w_mbytes_per_sec": 0 00:17:19.729 }, 00:17:19.730 "claimed": true, 00:17:19.730 "claim_type": "exclusive_write", 00:17:19.730 "zoned": false, 00:17:19.730 "supported_io_types": { 00:17:19.730 "read": true, 00:17:19.730 "write": true, 00:17:19.730 "unmap": true, 00:17:19.730 "flush": true, 00:17:19.730 "reset": true, 00:17:19.730 "nvme_admin": false, 00:17:19.730 "nvme_io": false, 00:17:19.730 "nvme_io_md": false, 00:17:19.730 "write_zeroes": true, 00:17:19.730 "zcopy": true, 00:17:19.730 
"get_zone_info": false, 00:17:19.730 "zone_management": false, 00:17:19.730 "zone_append": false, 00:17:19.730 "compare": false, 00:17:19.730 "compare_and_write": false, 00:17:19.730 "abort": true, 00:17:19.730 "seek_hole": false, 00:17:19.730 "seek_data": false, 00:17:19.730 "copy": true, 00:17:19.730 "nvme_iov_md": false 00:17:19.730 }, 00:17:19.730 "memory_domains": [ 00:17:19.730 { 00:17:19.730 "dma_device_id": "system", 00:17:19.730 "dma_device_type": 1 00:17:19.730 }, 00:17:19.730 { 00:17:19.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.730 "dma_device_type": 2 00:17:19.730 } 00:17:19.730 ], 00:17:19.730 "driver_specific": {} 00:17:19.730 } 00:17:19.730 ] 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.730 "name": "Existed_Raid", 00:17:19.730 "uuid": "311e57d4-3f57-4b8d-a28d-73f22d662abd", 00:17:19.730 "strip_size_kb": 0, 00:17:19.730 "state": "configuring", 00:17:19.730 "raid_level": "raid1", 00:17:19.730 "superblock": true, 00:17:19.730 "num_base_bdevs": 2, 00:17:19.730 "num_base_bdevs_discovered": 1, 00:17:19.730 "num_base_bdevs_operational": 2, 00:17:19.730 "base_bdevs_list": [ 00:17:19.730 { 00:17:19.730 "name": "BaseBdev1", 00:17:19.730 "uuid": "feeeb694-35c4-423a-9f64-b86721a0a018", 00:17:19.730 "is_configured": true, 00:17:19.730 "data_offset": 256, 00:17:19.730 "data_size": 7936 00:17:19.730 }, 00:17:19.730 { 00:17:19.730 "name": "BaseBdev2", 00:17:19.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.730 "is_configured": false, 00:17:19.730 "data_offset": 0, 00:17:19.730 "data_size": 0 00:17:19.730 } 00:17:19.730 ] 00:17:19.730 }' 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.730 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.003 [2024-12-14 12:43:19.715722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:20.003 [2024-12-14 12:43:19.715775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.003 [2024-12-14 12:43:19.727756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:20.003 [2024-12-14 12:43:19.729611] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:20.003 [2024-12-14 12:43:19.729689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:20.003 12:43:19 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.003 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.263 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.263 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.263 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.263 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.263 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.263 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.263 "name": "Existed_Raid", 00:17:20.263 "uuid": "99e0f548-20ed-4a35-8f1a-2d2812b435f8", 00:17:20.263 "strip_size_kb": 0, 00:17:20.263 "state": "configuring", 00:17:20.263 "raid_level": "raid1", 00:17:20.263 "superblock": true, 
00:17:20.263 "num_base_bdevs": 2, 00:17:20.263 "num_base_bdevs_discovered": 1, 00:17:20.263 "num_base_bdevs_operational": 2, 00:17:20.263 "base_bdevs_list": [ 00:17:20.263 { 00:17:20.263 "name": "BaseBdev1", 00:17:20.263 "uuid": "feeeb694-35c4-423a-9f64-b86721a0a018", 00:17:20.263 "is_configured": true, 00:17:20.263 "data_offset": 256, 00:17:20.263 "data_size": 7936 00:17:20.263 }, 00:17:20.263 { 00:17:20.263 "name": "BaseBdev2", 00:17:20.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.263 "is_configured": false, 00:17:20.263 "data_offset": 0, 00:17:20.263 "data_size": 0 00:17:20.263 } 00:17:20.263 ] 00:17:20.263 }' 00:17:20.263 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.263 12:43:19 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.522 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:20.522 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.523 [2024-12-14 12:43:20.172556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.523 [2024-12-14 12:43:20.172785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:20.523 [2024-12-14 12:43:20.172801] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.523 [2024-12-14 12:43:20.173034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:20.523 [2024-12-14 12:43:20.173234] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:20.523 [2024-12-14 12:43:20.173248] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:17:20.523 [2024-12-14 12:43:20.173405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.523 BaseBdev2 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.523 [ 00:17:20.523 { 00:17:20.523 "name": "BaseBdev2", 00:17:20.523 "aliases": [ 00:17:20.523 "8a8997b0-4a45-4962-b290-6ac820b1e5e6" 00:17:20.523 ], 00:17:20.523 "product_name": "Malloc 
disk", 00:17:20.523 "block_size": 4096, 00:17:20.523 "num_blocks": 8192, 00:17:20.523 "uuid": "8a8997b0-4a45-4962-b290-6ac820b1e5e6", 00:17:20.523 "assigned_rate_limits": { 00:17:20.523 "rw_ios_per_sec": 0, 00:17:20.523 "rw_mbytes_per_sec": 0, 00:17:20.523 "r_mbytes_per_sec": 0, 00:17:20.523 "w_mbytes_per_sec": 0 00:17:20.523 }, 00:17:20.523 "claimed": true, 00:17:20.523 "claim_type": "exclusive_write", 00:17:20.523 "zoned": false, 00:17:20.523 "supported_io_types": { 00:17:20.523 "read": true, 00:17:20.523 "write": true, 00:17:20.523 "unmap": true, 00:17:20.523 "flush": true, 00:17:20.523 "reset": true, 00:17:20.523 "nvme_admin": false, 00:17:20.523 "nvme_io": false, 00:17:20.523 "nvme_io_md": false, 00:17:20.523 "write_zeroes": true, 00:17:20.523 "zcopy": true, 00:17:20.523 "get_zone_info": false, 00:17:20.523 "zone_management": false, 00:17:20.523 "zone_append": false, 00:17:20.523 "compare": false, 00:17:20.523 "compare_and_write": false, 00:17:20.523 "abort": true, 00:17:20.523 "seek_hole": false, 00:17:20.523 "seek_data": false, 00:17:20.523 "copy": true, 00:17:20.523 "nvme_iov_md": false 00:17:20.523 }, 00:17:20.523 "memory_domains": [ 00:17:20.523 { 00:17:20.523 "dma_device_id": "system", 00:17:20.523 "dma_device_type": 1 00:17:20.523 }, 00:17:20.523 { 00:17:20.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.523 "dma_device_type": 2 00:17:20.523 } 00:17:20.523 ], 00:17:20.523 "driver_specific": {} 00:17:20.523 } 00:17:20.523 ] 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.523 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.782 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.782 "name": "Existed_Raid", 00:17:20.782 "uuid": "99e0f548-20ed-4a35-8f1a-2d2812b435f8", 00:17:20.782 "strip_size_kb": 0, 00:17:20.782 "state": "online", 
00:17:20.782 "raid_level": "raid1", 00:17:20.782 "superblock": true, 00:17:20.782 "num_base_bdevs": 2, 00:17:20.782 "num_base_bdevs_discovered": 2, 00:17:20.782 "num_base_bdevs_operational": 2, 00:17:20.782 "base_bdevs_list": [ 00:17:20.782 { 00:17:20.782 "name": "BaseBdev1", 00:17:20.782 "uuid": "feeeb694-35c4-423a-9f64-b86721a0a018", 00:17:20.782 "is_configured": true, 00:17:20.782 "data_offset": 256, 00:17:20.782 "data_size": 7936 00:17:20.782 }, 00:17:20.782 { 00:17:20.782 "name": "BaseBdev2", 00:17:20.782 "uuid": "8a8997b0-4a45-4962-b290-6ac820b1e5e6", 00:17:20.782 "is_configured": true, 00:17:20.782 "data_offset": 256, 00:17:20.782 "data_size": 7936 00:17:20.782 } 00:17:20.782 ] 00:17:20.782 }' 00:17:20.782 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.782 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.042 [2024-12-14 12:43:20.624173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:21.042 "name": "Existed_Raid", 00:17:21.042 "aliases": [ 00:17:21.042 "99e0f548-20ed-4a35-8f1a-2d2812b435f8" 00:17:21.042 ], 00:17:21.042 "product_name": "Raid Volume", 00:17:21.042 "block_size": 4096, 00:17:21.042 "num_blocks": 7936, 00:17:21.042 "uuid": "99e0f548-20ed-4a35-8f1a-2d2812b435f8", 00:17:21.042 "assigned_rate_limits": { 00:17:21.042 "rw_ios_per_sec": 0, 00:17:21.042 "rw_mbytes_per_sec": 0, 00:17:21.042 "r_mbytes_per_sec": 0, 00:17:21.042 "w_mbytes_per_sec": 0 00:17:21.042 }, 00:17:21.042 "claimed": false, 00:17:21.042 "zoned": false, 00:17:21.042 "supported_io_types": { 00:17:21.042 "read": true, 00:17:21.042 "write": true, 00:17:21.042 "unmap": false, 00:17:21.042 "flush": false, 00:17:21.042 "reset": true, 00:17:21.042 "nvme_admin": false, 00:17:21.042 "nvme_io": false, 00:17:21.042 "nvme_io_md": false, 00:17:21.042 "write_zeroes": true, 00:17:21.042 "zcopy": false, 00:17:21.042 "get_zone_info": false, 00:17:21.042 "zone_management": false, 00:17:21.042 "zone_append": false, 00:17:21.042 "compare": false, 00:17:21.042 "compare_and_write": false, 00:17:21.042 "abort": false, 00:17:21.042 "seek_hole": false, 00:17:21.042 "seek_data": false, 00:17:21.042 "copy": false, 00:17:21.042 "nvme_iov_md": false 00:17:21.042 }, 00:17:21.042 "memory_domains": [ 00:17:21.042 { 00:17:21.042 "dma_device_id": "system", 00:17:21.042 "dma_device_type": 1 00:17:21.042 }, 00:17:21.042 { 00:17:21.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.042 "dma_device_type": 2 00:17:21.042 }, 00:17:21.042 { 00:17:21.042 
"dma_device_id": "system", 00:17:21.042 "dma_device_type": 1 00:17:21.042 }, 00:17:21.042 { 00:17:21.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.042 "dma_device_type": 2 00:17:21.042 } 00:17:21.042 ], 00:17:21.042 "driver_specific": { 00:17:21.042 "raid": { 00:17:21.042 "uuid": "99e0f548-20ed-4a35-8f1a-2d2812b435f8", 00:17:21.042 "strip_size_kb": 0, 00:17:21.042 "state": "online", 00:17:21.042 "raid_level": "raid1", 00:17:21.042 "superblock": true, 00:17:21.042 "num_base_bdevs": 2, 00:17:21.042 "num_base_bdevs_discovered": 2, 00:17:21.042 "num_base_bdevs_operational": 2, 00:17:21.042 "base_bdevs_list": [ 00:17:21.042 { 00:17:21.042 "name": "BaseBdev1", 00:17:21.042 "uuid": "feeeb694-35c4-423a-9f64-b86721a0a018", 00:17:21.042 "is_configured": true, 00:17:21.042 "data_offset": 256, 00:17:21.042 "data_size": 7936 00:17:21.042 }, 00:17:21.042 { 00:17:21.042 "name": "BaseBdev2", 00:17:21.042 "uuid": "8a8997b0-4a45-4962-b290-6ac820b1e5e6", 00:17:21.042 "is_configured": true, 00:17:21.042 "data_offset": 256, 00:17:21.042 "data_size": 7936 00:17:21.042 } 00:17:21.042 ] 00:17:21.042 } 00:17:21.042 } 00:17:21.042 }' 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:21.042 BaseBdev2' 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.042 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.303 
12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.303 [2024-12-14 12:43:20.851539] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.303 12:43:20 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.303 "name": "Existed_Raid", 00:17:21.303 "uuid": "99e0f548-20ed-4a35-8f1a-2d2812b435f8", 00:17:21.303 "strip_size_kb": 0, 00:17:21.303 "state": "online", 00:17:21.303 "raid_level": "raid1", 00:17:21.303 "superblock": true, 00:17:21.303 "num_base_bdevs": 2, 00:17:21.303 "num_base_bdevs_discovered": 1, 00:17:21.303 "num_base_bdevs_operational": 1, 00:17:21.303 "base_bdevs_list": [ 00:17:21.303 { 00:17:21.303 "name": null, 00:17:21.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.303 "is_configured": false, 00:17:21.303 "data_offset": 0, 00:17:21.303 "data_size": 7936 00:17:21.303 }, 00:17:21.303 { 00:17:21.303 "name": "BaseBdev2", 00:17:21.303 "uuid": "8a8997b0-4a45-4962-b290-6ac820b1e5e6", 00:17:21.303 "is_configured": true, 00:17:21.303 "data_offset": 256, 00:17:21.303 "data_size": 7936 00:17:21.303 } 00:17:21.303 ] 00:17:21.303 }' 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.303 12:43:20 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:21.872 12:43:21 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 [2024-12-14 12:43:21.447682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:21.872 [2024-12-14 12:43:21.447799] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.872 [2024-12-14 12:43:21.540113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.872 [2024-12-14 12:43:21.540243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.872 [2024-12-14 12:43:21.540289] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:21.872 12:43:21 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 87669 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 87669 ']' 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 87669 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.872 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87669 00:17:22.132 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:22.132 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:22.132 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87669' 00:17:22.132 killing process with pid 87669 00:17:22.132 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 87669 00:17:22.132 [2024-12-14 12:43:21.623951] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:22.132 12:43:21 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 87669 00:17:22.132 [2024-12-14 12:43:21.640597] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:23.072 12:43:22 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:23.072 00:17:23.072 real 0m4.907s 00:17:23.072 user 0m7.060s 00:17:23.072 sys 0m0.822s 00:17:23.072 ************************************ 00:17:23.072 END TEST raid_state_function_test_sb_4k 00:17:23.072 ************************************ 00:17:23.072 12:43:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.072 12:43:22 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.072 12:43:22 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:23.072 12:43:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:23.072 12:43:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.072 12:43:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:23.072 ************************************ 00:17:23.072 START TEST raid_superblock_test_4k 00:17:23.072 ************************************ 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # 
raid_superblock_test raid1 2 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:23.072 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=87916 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 87916 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 87916 ']' 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:23.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:23.332 12:43:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.332 [2024-12-14 12:43:22.893821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:23.332 [2024-12-14 12:43:22.894036] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87916 ] 00:17:23.332 [2024-12-14 12:43:23.065471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.591 [2024-12-14 12:43:23.175987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.851 [2024-12-14 12:43:23.369889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.851 [2024-12-14 12:43:23.370014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:24.111 12:43:23 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.111 malloc1 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.111 [2024-12-14 12:43:23.763477] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.111 [2024-12-14 12:43:23.763588] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.111 
[2024-12-14 12:43:23.763629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:24.111 [2024-12-14 12:43:23.763683] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.111 [2024-12-14 12:43:23.765753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.111 [2024-12-14 12:43:23.765824] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:24.111 pt1 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.111 malloc2 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.111 [2024-12-14 12:43:23.819251] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:24.111 [2024-12-14 12:43:23.819303] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.111 [2024-12-14 12:43:23.819325] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:24.111 [2024-12-14 12:43:23.819334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.111 [2024-12-14 12:43:23.821367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.111 [2024-12-14 12:43:23.821403] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:24.111 pt2 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.111 [2024-12-14 12:43:23.831290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.111 [2024-12-14 12:43:23.833007] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:24.111 [2024-12-14 12:43:23.833239] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:24.111 [2024-12-14 12:43:23.833289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:24.111 [2024-12-14 12:43:23.833541] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:24.111 [2024-12-14 12:43:23.833698] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:24.111 [2024-12-14 12:43:23.833714] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:24.111 [2024-12-14 12:43:23.833854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.111 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.371 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.371 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.371 "name": "raid_bdev1", 00:17:24.371 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:24.371 "strip_size_kb": 0, 00:17:24.371 "state": "online", 00:17:24.371 "raid_level": "raid1", 00:17:24.371 "superblock": true, 00:17:24.371 "num_base_bdevs": 2, 00:17:24.371 "num_base_bdevs_discovered": 2, 00:17:24.371 "num_base_bdevs_operational": 2, 00:17:24.371 "base_bdevs_list": [ 00:17:24.371 { 00:17:24.371 "name": "pt1", 00:17:24.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.371 "is_configured": true, 00:17:24.371 "data_offset": 256, 00:17:24.371 "data_size": 7936 00:17:24.371 }, 00:17:24.371 { 00:17:24.371 "name": "pt2", 00:17:24.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.371 "is_configured": true, 00:17:24.371 "data_offset": 256, 00:17:24.371 "data_size": 7936 00:17:24.371 } 00:17:24.371 ] 00:17:24.371 }' 00:17:24.371 12:43:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.371 12:43:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:24.631 12:43:24 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.631 [2024-12-14 12:43:24.278802] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:24.631 "name": "raid_bdev1", 00:17:24.631 "aliases": [ 00:17:24.631 "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e" 00:17:24.631 ], 00:17:24.631 "product_name": "Raid Volume", 00:17:24.631 "block_size": 4096, 00:17:24.631 "num_blocks": 7936, 00:17:24.631 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:24.631 "assigned_rate_limits": { 00:17:24.631 "rw_ios_per_sec": 0, 00:17:24.631 "rw_mbytes_per_sec": 0, 00:17:24.631 "r_mbytes_per_sec": 0, 00:17:24.631 "w_mbytes_per_sec": 0 00:17:24.631 }, 00:17:24.631 "claimed": false, 00:17:24.631 "zoned": false, 00:17:24.631 "supported_io_types": { 00:17:24.631 "read": true, 00:17:24.631 "write": true, 00:17:24.631 "unmap": false, 00:17:24.631 "flush": false, 
00:17:24.631 "reset": true, 00:17:24.631 "nvme_admin": false, 00:17:24.631 "nvme_io": false, 00:17:24.631 "nvme_io_md": false, 00:17:24.631 "write_zeroes": true, 00:17:24.631 "zcopy": false, 00:17:24.631 "get_zone_info": false, 00:17:24.631 "zone_management": false, 00:17:24.631 "zone_append": false, 00:17:24.631 "compare": false, 00:17:24.631 "compare_and_write": false, 00:17:24.631 "abort": false, 00:17:24.631 "seek_hole": false, 00:17:24.631 "seek_data": false, 00:17:24.631 "copy": false, 00:17:24.631 "nvme_iov_md": false 00:17:24.631 }, 00:17:24.631 "memory_domains": [ 00:17:24.631 { 00:17:24.631 "dma_device_id": "system", 00:17:24.631 "dma_device_type": 1 00:17:24.631 }, 00:17:24.631 { 00:17:24.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.631 "dma_device_type": 2 00:17:24.631 }, 00:17:24.631 { 00:17:24.631 "dma_device_id": "system", 00:17:24.631 "dma_device_type": 1 00:17:24.631 }, 00:17:24.631 { 00:17:24.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.631 "dma_device_type": 2 00:17:24.631 } 00:17:24.631 ], 00:17:24.631 "driver_specific": { 00:17:24.631 "raid": { 00:17:24.631 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:24.631 "strip_size_kb": 0, 00:17:24.631 "state": "online", 00:17:24.631 "raid_level": "raid1", 00:17:24.631 "superblock": true, 00:17:24.631 "num_base_bdevs": 2, 00:17:24.631 "num_base_bdevs_discovered": 2, 00:17:24.631 "num_base_bdevs_operational": 2, 00:17:24.631 "base_bdevs_list": [ 00:17:24.631 { 00:17:24.631 "name": "pt1", 00:17:24.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.631 "is_configured": true, 00:17:24.631 "data_offset": 256, 00:17:24.631 "data_size": 7936 00:17:24.631 }, 00:17:24.631 { 00:17:24.631 "name": "pt2", 00:17:24.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.631 "is_configured": true, 00:17:24.631 "data_offset": 256, 00:17:24.631 "data_size": 7936 00:17:24.631 } 00:17:24.631 ] 00:17:24.631 } 00:17:24.631 } 00:17:24.631 }' 00:17:24.631 12:43:24 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:24.631 pt2' 00:17:24.631 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.891 [2024-12-14 12:43:24.514338] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=522e0fbb-36a2-4e26-92d8-7a4e6880ea9e 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 522e0fbb-36a2-4e26-92d8-7a4e6880ea9e ']' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.891 [2024-12-14 12:43:24.542011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.891 [2024-12-14 12:43:24.542085] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.891 [2024-12-14 12:43:24.542185] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.891 [2024-12-14 12:43:24.542276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.891 [2024-12-14 12:43:24.542324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.891 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.151 [2024-12-14 12:43:24.673813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:25.151 [2024-12-14 12:43:24.675679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:25.151 [2024-12-14 12:43:24.675817] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:25.151 [2024-12-14 12:43:24.675922] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:25.151 [2024-12-14 12:43:24.675978] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:25.151 [2024-12-14 12:43:24.676011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:25.151 request: 00:17:25.151 { 00:17:25.151 "name": "raid_bdev1", 00:17:25.151 "raid_level": "raid1", 00:17:25.151 "base_bdevs": [ 00:17:25.151 "malloc1", 00:17:25.151 "malloc2" 00:17:25.151 ], 00:17:25.151 "superblock": false, 00:17:25.151 "method": "bdev_raid_create", 00:17:25.151 "req_id": 1 00:17:25.151 } 00:17:25.151 Got JSON-RPC error response 00:17:25.151 response: 00:17:25.151 { 00:17:25.151 "code": -17, 00:17:25.151 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:25.151 } 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 
128 )) 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.151 [2024-12-14 12:43:24.717721] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:25.151 [2024-12-14 12:43:24.717805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.151 [2024-12-14 12:43:24.717824] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:25.151 [2024-12-14 12:43:24.717834] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.151 [2024-12-14 12:43:24.719978] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.151 [2024-12-14 12:43:24.720017] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:25.151 [2024-12-14 12:43:24.720099] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:25.151 [2024-12-14 12:43:24.720149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:25.151 pt1 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.151 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.152 "name": "raid_bdev1", 00:17:25.152 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:25.152 "strip_size_kb": 0, 00:17:25.152 "state": "configuring", 00:17:25.152 "raid_level": "raid1", 00:17:25.152 "superblock": true, 00:17:25.152 "num_base_bdevs": 2, 00:17:25.152 "num_base_bdevs_discovered": 1, 00:17:25.152 "num_base_bdevs_operational": 2, 00:17:25.152 "base_bdevs_list": [ 00:17:25.152 { 00:17:25.152 "name": "pt1", 00:17:25.152 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.152 "is_configured": true, 00:17:25.152 "data_offset": 256, 00:17:25.152 "data_size": 7936 00:17:25.152 }, 00:17:25.152 { 00:17:25.152 "name": null, 00:17:25.152 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.152 "is_configured": false, 00:17:25.152 "data_offset": 256, 00:17:25.152 "data_size": 7936 00:17:25.152 } 00:17:25.152 ] 00:17:25.152 }' 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.152 12:43:24 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.411 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:25.411 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:25.411 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:25.411 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.411 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.411 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 
-- # set +x 00:17:25.411 [2024-12-14 12:43:25.145021] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.411 [2024-12-14 12:43:25.145159] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.411 [2024-12-14 12:43:25.145186] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:25.411 [2024-12-14 12:43:25.145196] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.411 [2024-12-14 12:43:25.145628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.411 [2024-12-14 12:43:25.145649] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.411 [2024-12-14 12:43:25.145725] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:25.411 [2024-12-14 12:43:25.145749] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.411 [2024-12-14 12:43:25.145857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:25.411 [2024-12-14 12:43:25.145868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.411 [2024-12-14 12:43:25.146116] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:25.668 [2024-12-14 12:43:25.146266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:25.668 [2024-12-14 12:43:25.146279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:25.668 [2024-12-14 12:43:25.146424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.668 pt2 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:25.668 12:43:25 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.668 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.669 "name": "raid_bdev1", 00:17:25.669 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:25.669 
"strip_size_kb": 0, 00:17:25.669 "state": "online", 00:17:25.669 "raid_level": "raid1", 00:17:25.669 "superblock": true, 00:17:25.669 "num_base_bdevs": 2, 00:17:25.669 "num_base_bdevs_discovered": 2, 00:17:25.669 "num_base_bdevs_operational": 2, 00:17:25.669 "base_bdevs_list": [ 00:17:25.669 { 00:17:25.669 "name": "pt1", 00:17:25.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.669 "is_configured": true, 00:17:25.669 "data_offset": 256, 00:17:25.669 "data_size": 7936 00:17:25.669 }, 00:17:25.669 { 00:17:25.669 "name": "pt2", 00:17:25.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.669 "is_configured": true, 00:17:25.669 "data_offset": 256, 00:17:25.669 "data_size": 7936 00:17:25.669 } 00:17:25.669 ] 00:17:25.669 }' 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.669 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.926 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.926 12:43:25 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.927 [2024-12-14 12:43:25.568518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.927 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.927 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:25.927 "name": "raid_bdev1", 00:17:25.927 "aliases": [ 00:17:25.927 "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e" 00:17:25.927 ], 00:17:25.927 "product_name": "Raid Volume", 00:17:25.927 "block_size": 4096, 00:17:25.927 "num_blocks": 7936, 00:17:25.927 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:25.927 "assigned_rate_limits": { 00:17:25.927 "rw_ios_per_sec": 0, 00:17:25.927 "rw_mbytes_per_sec": 0, 00:17:25.927 "r_mbytes_per_sec": 0, 00:17:25.927 "w_mbytes_per_sec": 0 00:17:25.927 }, 00:17:25.927 "claimed": false, 00:17:25.927 "zoned": false, 00:17:25.927 "supported_io_types": { 00:17:25.927 "read": true, 00:17:25.927 "write": true, 00:17:25.927 "unmap": false, 00:17:25.927 "flush": false, 00:17:25.927 "reset": true, 00:17:25.927 "nvme_admin": false, 00:17:25.927 "nvme_io": false, 00:17:25.927 "nvme_io_md": false, 00:17:25.927 "write_zeroes": true, 00:17:25.927 "zcopy": false, 00:17:25.927 "get_zone_info": false, 00:17:25.927 "zone_management": false, 00:17:25.927 "zone_append": false, 00:17:25.927 "compare": false, 00:17:25.927 "compare_and_write": false, 00:17:25.927 "abort": false, 00:17:25.927 "seek_hole": false, 00:17:25.927 "seek_data": false, 00:17:25.927 "copy": false, 00:17:25.927 "nvme_iov_md": false 00:17:25.927 }, 00:17:25.927 "memory_domains": [ 00:17:25.927 { 00:17:25.927 "dma_device_id": "system", 00:17:25.927 "dma_device_type": 1 00:17:25.927 }, 00:17:25.927 { 00:17:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.927 "dma_device_type": 2 00:17:25.927 }, 00:17:25.927 { 00:17:25.927 "dma_device_id": "system", 00:17:25.927 
"dma_device_type": 1 00:17:25.927 }, 00:17:25.927 { 00:17:25.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.927 "dma_device_type": 2 00:17:25.927 } 00:17:25.927 ], 00:17:25.927 "driver_specific": { 00:17:25.927 "raid": { 00:17:25.927 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:25.927 "strip_size_kb": 0, 00:17:25.927 "state": "online", 00:17:25.927 "raid_level": "raid1", 00:17:25.927 "superblock": true, 00:17:25.927 "num_base_bdevs": 2, 00:17:25.927 "num_base_bdevs_discovered": 2, 00:17:25.927 "num_base_bdevs_operational": 2, 00:17:25.927 "base_bdevs_list": [ 00:17:25.927 { 00:17:25.927 "name": "pt1", 00:17:25.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.927 "is_configured": true, 00:17:25.927 "data_offset": 256, 00:17:25.927 "data_size": 7936 00:17:25.927 }, 00:17:25.927 { 00:17:25.927 "name": "pt2", 00:17:25.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.927 "is_configured": true, 00:17:25.927 "data_offset": 256, 00:17:25.927 "data_size": 7936 00:17:25.927 } 00:17:25.927 ] 00:17:25.927 } 00:17:25.927 } 00:17:25.927 }' 00:17:25.927 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:25.927 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:25.927 pt2' 00:17:25.927 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:26.186 [2024-12-14 12:43:25.788116] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 522e0fbb-36a2-4e26-92d8-7a4e6880ea9e '!=' 522e0fbb-36a2-4e26-92d8-7a4e6880ea9e ']' 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.186 [2024-12-14 12:43:25.835857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.186 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.186 "name": "raid_bdev1", 00:17:26.186 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:26.186 "strip_size_kb": 0, 00:17:26.186 "state": "online", 00:17:26.186 "raid_level": "raid1", 00:17:26.186 "superblock": true, 00:17:26.186 "num_base_bdevs": 2, 00:17:26.186 "num_base_bdevs_discovered": 1, 00:17:26.186 "num_base_bdevs_operational": 1, 00:17:26.186 "base_bdevs_list": [ 00:17:26.186 { 00:17:26.186 "name": null, 00:17:26.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.186 "is_configured": false, 00:17:26.186 "data_offset": 0, 00:17:26.186 "data_size": 7936 00:17:26.186 }, 00:17:26.186 { 00:17:26.186 "name": "pt2", 00:17:26.186 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.186 "is_configured": true, 00:17:26.186 "data_offset": 256, 00:17:26.186 "data_size": 7936 00:17:26.186 } 00:17:26.187 ] 00:17:26.187 }' 00:17:26.187 12:43:25 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.187 12:43:25 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.755 [2024-12-14 12:43:26.283077] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.755 [2024-12-14 12:43:26.283144] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.755 [2024-12-14 12:43:26.283235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.755 [2024-12-14 12:43:26.283298] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.755 [2024-12-14 12:43:26.283368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.755 [2024-12-14 12:43:26.354919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:26.755 [2024-12-14 12:43:26.354969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.755 [2024-12-14 12:43:26.354984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:26.755 [2024-12-14 12:43:26.354994] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.755 [2024-12-14 12:43:26.357153] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.755 [2024-12-14 12:43:26.357190] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:26.755 [2024-12-14 12:43:26.357262] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:26.755 [2024-12-14 12:43:26.357311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:26.755 [2024-12-14 12:43:26.357418] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:26.755 [2024-12-14 12:43:26.357430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:26.755 [2024-12-14 12:43:26.357660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:26.755 [2024-12-14 12:43:26.357821] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:26.755 [2024-12-14 12:43:26.357830] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:26.755 [2024-12-14 12:43:26.357974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.755 pt2 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.755 "name": "raid_bdev1", 00:17:26.755 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:26.755 "strip_size_kb": 0, 00:17:26.755 "state": "online", 00:17:26.755 "raid_level": "raid1", 00:17:26.755 "superblock": true, 00:17:26.755 "num_base_bdevs": 2, 00:17:26.755 "num_base_bdevs_discovered": 1, 00:17:26.755 "num_base_bdevs_operational": 1, 00:17:26.755 "base_bdevs_list": [ 00:17:26.755 { 00:17:26.755 "name": null, 00:17:26.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.755 "is_configured": false, 00:17:26.755 "data_offset": 256, 00:17:26.755 "data_size": 7936 00:17:26.755 }, 00:17:26.755 { 00:17:26.755 "name": "pt2", 00:17:26.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.755 "is_configured": true, 00:17:26.755 "data_offset": 256, 00:17:26.755 "data_size": 7936 00:17:26.755 } 00:17:26.755 ] 00:17:26.755 }' 
00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.755 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.324 [2024-12-14 12:43:26.778194] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.324 [2024-12-14 12:43:26.778266] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:27.324 [2024-12-14 12:43:26.778355] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.324 [2024-12-14 12:43:26.778420] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.324 [2024-12-14 12:43:26.778463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.324 [2024-12-14 12:43:26.838136] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.324 [2024-12-14 12:43:26.838251] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.324 [2024-12-14 12:43:26.838291] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:27.324 [2024-12-14 12:43:26.838349] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.324 [2024-12-14 12:43:26.840499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.324 [2024-12-14 12:43:26.840568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.324 [2024-12-14 12:43:26.840665] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:27.324 [2024-12-14 12:43:26.840745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.324 [2024-12-14 12:43:26.840915] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:27.324 [2024-12-14 12:43:26.840971] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.324 [2024-12-14 12:43:26.841011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:27.324 [2024-12-14 12:43:26.841123] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:27.324 [2024-12-14 12:43:26.841226] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:27.324 [2024-12-14 12:43:26.841264] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:27.324 [2024-12-14 12:43:26.841526] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:27.324 [2024-12-14 12:43:26.841708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:27.324 [2024-12-14 12:43:26.841754] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:27.324 [2024-12-14 12:43:26.841944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.324 pt1 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.324 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.325 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.325 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.325 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.325 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.325 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.325 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.325 "name": "raid_bdev1", 00:17:27.325 "uuid": "522e0fbb-36a2-4e26-92d8-7a4e6880ea9e", 00:17:27.325 "strip_size_kb": 0, 00:17:27.325 "state": "online", 00:17:27.325 "raid_level": "raid1", 00:17:27.325 "superblock": true, 00:17:27.325 "num_base_bdevs": 2, 00:17:27.325 "num_base_bdevs_discovered": 1, 00:17:27.325 "num_base_bdevs_operational": 1, 00:17:27.325 "base_bdevs_list": [ 00:17:27.325 { 00:17:27.325 "name": null, 00:17:27.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.325 "is_configured": false, 00:17:27.325 "data_offset": 256, 00:17:27.325 "data_size": 7936 00:17:27.325 }, 00:17:27.325 { 00:17:27.325 "name": "pt2", 00:17:27.325 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.325 "is_configured": true, 00:17:27.325 "data_offset": 256, 00:17:27.325 "data_size": 7936 00:17:27.325 } 00:17:27.325 ] 00:17:27.325 }' 00:17:27.325 12:43:26 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.325 12:43:26 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.584 12:43:27 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:27.584 12:43:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:27.584 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.584 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.584 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.584 12:43:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.845 [2024-12-14 12:43:27.329482] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 522e0fbb-36a2-4e26-92d8-7a4e6880ea9e '!=' 522e0fbb-36a2-4e26-92d8-7a4e6880ea9e ']' 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 87916 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 87916 ']' 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 87916 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87916 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87916' 00:17:27.845 killing process with pid 87916 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 87916 00:17:27.845 [2024-12-14 12:43:27.408062] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.845 [2024-12-14 12:43:27.408137] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.845 [2024-12-14 12:43:27.408182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.845 [2024-12-14 12:43:27.408197] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:27.845 12:43:27 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 87916 00:17:28.105 [2024-12-14 12:43:27.612992] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:29.046 12:43:28 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:29.046 ************************************ 00:17:29.046 END TEST raid_superblock_test_4k 00:17:29.046 ************************************ 00:17:29.046 00:17:29.046 real 0m5.903s 00:17:29.046 user 0m8.959s 00:17:29.046 sys 0m0.992s 00:17:29.046 12:43:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.046 12:43:28 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.046 12:43:28 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:17:29.046 12:43:28 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:29.046 12:43:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:29.046 12:43:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.046 12:43:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:29.046 ************************************ 00:17:29.046 START TEST raid_rebuild_test_sb_4k 00:17:29.046 ************************************ 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:29.046 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=88243 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 88243 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 88243 ']' 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.306 12:43:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.306 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:29.306 Zero copy mechanism will not be used. 00:17:29.306 [2024-12-14 12:43:28.877495] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:29.306 [2024-12-14 12:43:28.877698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88243 ] 00:17:29.566 [2024-12-14 12:43:29.049569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.566 [2024-12-14 12:43:29.154692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.826 [2024-12-14 12:43:29.353942] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.826 [2024-12-14 12:43:29.354085] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:30.087 
12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.087 BaseBdev1_malloc 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.087 [2024-12-14 12:43:29.734722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:30.087 [2024-12-14 12:43:29.734824] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.087 [2024-12-14 12:43:29.734849] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:30.087 [2024-12-14 12:43:29.734861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.087 [2024-12-14 12:43:29.736910] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.087 [2024-12-14 12:43:29.736952] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.087 BaseBdev1 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:30.087 BaseBdev2_malloc 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.087 [2024-12-14 12:43:29.785145] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:30.087 [2024-12-14 12:43:29.785242] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.087 [2024-12-14 12:43:29.785263] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:30.087 [2024-12-14 12:43:29.785276] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.087 [2024-12-14 12:43:29.787304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.087 [2024-12-14 12:43:29.787342] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:30.087 BaseBdev2 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.087 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.346 spare_malloc 00:17:30.346 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.346 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:30.346 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.346 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.346 spare_delay 00:17:30.346 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.347 [2024-12-14 12:43:29.865033] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:30.347 [2024-12-14 12:43:29.865097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.347 [2024-12-14 12:43:29.865116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:30.347 [2024-12-14 12:43:29.865126] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.347 [2024-12-14 12:43:29.867184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.347 [2024-12-14 12:43:29.867271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:30.347 spare 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.347 
[2024-12-14 12:43:29.877063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.347 [2024-12-14 12:43:29.878849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.347 [2024-12-14 12:43:29.879094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:30.347 [2024-12-14 12:43:29.879143] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.347 [2024-12-14 12:43:29.879402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:30.347 [2024-12-14 12:43:29.879618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:30.347 [2024-12-14 12:43:29.879659] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:30.347 [2024-12-14 12:43:29.879849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.347 "name": "raid_bdev1", 00:17:30.347 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:30.347 "strip_size_kb": 0, 00:17:30.347 "state": "online", 00:17:30.347 "raid_level": "raid1", 00:17:30.347 "superblock": true, 00:17:30.347 "num_base_bdevs": 2, 00:17:30.347 "num_base_bdevs_discovered": 2, 00:17:30.347 "num_base_bdevs_operational": 2, 00:17:30.347 "base_bdevs_list": [ 00:17:30.347 { 00:17:30.347 "name": "BaseBdev1", 00:17:30.347 "uuid": "985d9c04-2e34-5ea8-bfa1-d2b027001cf1", 00:17:30.347 "is_configured": true, 00:17:30.347 "data_offset": 256, 00:17:30.347 "data_size": 7936 00:17:30.347 }, 00:17:30.347 { 00:17:30.347 "name": "BaseBdev2", 00:17:30.347 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:30.347 "is_configured": true, 00:17:30.347 "data_offset": 256, 00:17:30.347 "data_size": 7936 00:17:30.347 } 00:17:30.347 ] 00:17:30.347 }' 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.347 12:43:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:17:30.606 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:30.606 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.606 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.606 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.866 [2024-12-14 12:43:30.348521] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.866 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:31.126 [2024-12-14 12:43:30.639809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:31.126 /dev/nbd0 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:31.126 1+0 records in 00:17:31.126 1+0 records out 00:17:31.126 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257451 s, 15.9 MB/s 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:31.126 12:43:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:31.699 7936+0 records in 00:17:31.699 7936+0 records out 00:17:31.699 32505856 bytes (33 MB, 31 MiB) copied, 0.56755 s, 57.3 MB/s 00:17:31.699 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:31.699 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.699 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:31.699 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.699 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:31.699 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.699 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:31.959 [2024-12-14 12:43:31.457596] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.959 [2024-12-14 12:43:31.489632] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.959 "name": 
"raid_bdev1", 00:17:31.959 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:31.959 "strip_size_kb": 0, 00:17:31.959 "state": "online", 00:17:31.959 "raid_level": "raid1", 00:17:31.959 "superblock": true, 00:17:31.959 "num_base_bdevs": 2, 00:17:31.959 "num_base_bdevs_discovered": 1, 00:17:31.959 "num_base_bdevs_operational": 1, 00:17:31.959 "base_bdevs_list": [ 00:17:31.959 { 00:17:31.959 "name": null, 00:17:31.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.959 "is_configured": false, 00:17:31.959 "data_offset": 0, 00:17:31.959 "data_size": 7936 00:17:31.959 }, 00:17:31.959 { 00:17:31.959 "name": "BaseBdev2", 00:17:31.959 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:31.959 "is_configured": true, 00:17:31.959 "data_offset": 256, 00:17:31.959 "data_size": 7936 00:17:31.959 } 00:17:31.959 ] 00:17:31.959 }' 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.959 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.219 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:32.219 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.219 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.219 [2024-12-14 12:43:31.861019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.219 [2024-12-14 12:43:31.876080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:32.219 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.219 12:43:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:32.219 [2024-12-14 12:43:31.877904] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:33.158 12:43:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.158 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.158 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.158 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.158 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.158 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.158 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.158 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.158 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.418 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.418 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.418 "name": "raid_bdev1", 00:17:33.418 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:33.418 "strip_size_kb": 0, 00:17:33.418 "state": "online", 00:17:33.418 "raid_level": "raid1", 00:17:33.418 "superblock": true, 00:17:33.418 "num_base_bdevs": 2, 00:17:33.418 "num_base_bdevs_discovered": 2, 00:17:33.419 "num_base_bdevs_operational": 2, 00:17:33.419 "process": { 00:17:33.419 "type": "rebuild", 00:17:33.419 "target": "spare", 00:17:33.419 "progress": { 00:17:33.419 "blocks": 2560, 00:17:33.419 "percent": 32 00:17:33.419 } 00:17:33.419 }, 00:17:33.419 "base_bdevs_list": [ 00:17:33.419 { 00:17:33.419 "name": "spare", 00:17:33.419 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:33.419 "is_configured": true, 00:17:33.419 "data_offset": 256, 
00:17:33.419 "data_size": 7936 00:17:33.419 }, 00:17:33.419 { 00:17:33.419 "name": "BaseBdev2", 00:17:33.419 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:33.419 "is_configured": true, 00:17:33.419 "data_offset": 256, 00:17:33.419 "data_size": 7936 00:17:33.419 } 00:17:33.419 ] 00:17:33.419 }' 00:17:33.419 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.419 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.419 12:43:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.419 [2024-12-14 12:43:33.029684] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.419 [2024-12-14 12:43:33.082689] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:33.419 [2024-12-14 12:43:33.082746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.419 [2024-12-14 12:43:33.082760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.419 [2024-12-14 12:43:33.082772] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:33.419 
12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.419 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.679 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.679 "name": "raid_bdev1", 00:17:33.679 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:33.679 "strip_size_kb": 0, 00:17:33.679 "state": "online", 00:17:33.679 "raid_level": "raid1", 00:17:33.679 "superblock": true, 00:17:33.679 "num_base_bdevs": 2, 00:17:33.679 "num_base_bdevs_discovered": 1, 00:17:33.679 
"num_base_bdevs_operational": 1, 00:17:33.679 "base_bdevs_list": [ 00:17:33.679 { 00:17:33.679 "name": null, 00:17:33.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.679 "is_configured": false, 00:17:33.679 "data_offset": 0, 00:17:33.679 "data_size": 7936 00:17:33.679 }, 00:17:33.679 { 00:17:33.679 "name": "BaseBdev2", 00:17:33.679 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:33.679 "is_configured": true, 00:17:33.679 "data_offset": 256, 00:17:33.679 "data_size": 7936 00:17:33.679 } 00:17:33.679 ] 00:17:33.679 }' 00:17:33.679 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.679 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.939 
"name": "raid_bdev1", 00:17:33.939 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:33.939 "strip_size_kb": 0, 00:17:33.939 "state": "online", 00:17:33.939 "raid_level": "raid1", 00:17:33.939 "superblock": true, 00:17:33.939 "num_base_bdevs": 2, 00:17:33.939 "num_base_bdevs_discovered": 1, 00:17:33.939 "num_base_bdevs_operational": 1, 00:17:33.939 "base_bdevs_list": [ 00:17:33.939 { 00:17:33.939 "name": null, 00:17:33.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.939 "is_configured": false, 00:17:33.939 "data_offset": 0, 00:17:33.939 "data_size": 7936 00:17:33.939 }, 00:17:33.939 { 00:17:33.939 "name": "BaseBdev2", 00:17:33.939 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:33.939 "is_configured": true, 00:17:33.939 "data_offset": 256, 00:17:33.939 "data_size": 7936 00:17:33.939 } 00:17:33.939 ] 00:17:33.939 }' 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.939 [2024-12-14 12:43:33.649078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:33.939 [2024-12-14 12:43:33.664928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:17:33.939 12:43:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:33.939 [2024-12-14 12:43:33.666810] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.320 "name": "raid_bdev1", 00:17:35.320 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:35.320 "strip_size_kb": 0, 00:17:35.320 "state": "online", 00:17:35.320 "raid_level": "raid1", 00:17:35.320 "superblock": true, 00:17:35.320 "num_base_bdevs": 2, 00:17:35.320 "num_base_bdevs_discovered": 2, 00:17:35.320 "num_base_bdevs_operational": 2, 00:17:35.320 "process": { 00:17:35.320 "type": "rebuild", 00:17:35.320 "target": "spare", 00:17:35.320 "progress": { 00:17:35.320 "blocks": 2560, 00:17:35.320 
"percent": 32 00:17:35.320 } 00:17:35.320 }, 00:17:35.320 "base_bdevs_list": [ 00:17:35.320 { 00:17:35.320 "name": "spare", 00:17:35.320 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:35.320 "is_configured": true, 00:17:35.320 "data_offset": 256, 00:17:35.320 "data_size": 7936 00:17:35.320 }, 00:17:35.320 { 00:17:35.320 "name": "BaseBdev2", 00:17:35.320 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:35.320 "is_configured": true, 00:17:35.320 "data_offset": 256, 00:17:35.320 "data_size": 7936 00:17:35.320 } 00:17:35.320 ] 00:17:35.320 }' 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.320 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:35.321 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=669 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.321 "name": "raid_bdev1", 00:17:35.321 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:35.321 "strip_size_kb": 0, 00:17:35.321 "state": "online", 00:17:35.321 "raid_level": "raid1", 00:17:35.321 "superblock": true, 00:17:35.321 "num_base_bdevs": 2, 00:17:35.321 "num_base_bdevs_discovered": 2, 00:17:35.321 "num_base_bdevs_operational": 2, 00:17:35.321 "process": { 00:17:35.321 "type": "rebuild", 00:17:35.321 "target": "spare", 00:17:35.321 "progress": { 00:17:35.321 "blocks": 2816, 00:17:35.321 "percent": 35 00:17:35.321 } 00:17:35.321 }, 00:17:35.321 "base_bdevs_list": [ 00:17:35.321 { 00:17:35.321 "name": "spare", 00:17:35.321 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:35.321 "is_configured": true, 00:17:35.321 "data_offset": 256, 00:17:35.321 "data_size": 7936 00:17:35.321 }, 00:17:35.321 { 00:17:35.321 "name": "BaseBdev2", 
00:17:35.321 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:35.321 "is_configured": true, 00:17:35.321 "data_offset": 256, 00:17:35.321 "data_size": 7936 00:17:35.321 } 00:17:35.321 ] 00:17:35.321 }' 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.321 12:43:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.259 "name": "raid_bdev1", 00:17:36.259 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:36.259 "strip_size_kb": 0, 00:17:36.259 "state": "online", 00:17:36.259 "raid_level": "raid1", 00:17:36.259 "superblock": true, 00:17:36.259 "num_base_bdevs": 2, 00:17:36.259 "num_base_bdevs_discovered": 2, 00:17:36.259 "num_base_bdevs_operational": 2, 00:17:36.259 "process": { 00:17:36.259 "type": "rebuild", 00:17:36.259 "target": "spare", 00:17:36.259 "progress": { 00:17:36.259 "blocks": 5632, 00:17:36.259 "percent": 70 00:17:36.259 } 00:17:36.259 }, 00:17:36.259 "base_bdevs_list": [ 00:17:36.259 { 00:17:36.259 "name": "spare", 00:17:36.259 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:36.259 "is_configured": true, 00:17:36.259 "data_offset": 256, 00:17:36.259 "data_size": 7936 00:17:36.259 }, 00:17:36.259 { 00:17:36.259 "name": "BaseBdev2", 00:17:36.259 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:36.259 "is_configured": true, 00:17:36.259 "data_offset": 256, 00:17:36.259 "data_size": 7936 00:17:36.259 } 00:17:36.259 ] 00:17:36.259 }' 00:17:36.259 12:43:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.519 12:43:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.519 12:43:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.519 12:43:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.519 12:43:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.088 [2024-12-14 12:43:36.778612] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:37.088 [2024-12-14 12:43:36.778735] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:37.088 [2024-12-14 12:43:36.778859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.658 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.658 "name": "raid_bdev1", 00:17:37.658 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:37.658 "strip_size_kb": 0, 00:17:37.658 "state": "online", 00:17:37.658 "raid_level": "raid1", 00:17:37.658 "superblock": true, 00:17:37.658 "num_base_bdevs": 2, 00:17:37.658 "num_base_bdevs_discovered": 2, 00:17:37.658 "num_base_bdevs_operational": 2, 00:17:37.658 "base_bdevs_list": [ 00:17:37.658 { 00:17:37.658 "name": 
"spare", 00:17:37.658 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:37.658 "is_configured": true, 00:17:37.658 "data_offset": 256, 00:17:37.658 "data_size": 7936 00:17:37.658 }, 00:17:37.659 { 00:17:37.659 "name": "BaseBdev2", 00:17:37.659 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:37.659 "is_configured": true, 00:17:37.659 "data_offset": 256, 00:17:37.659 "data_size": 7936 00:17:37.659 } 00:17:37.659 ] 00:17:37.659 }' 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.659 "name": "raid_bdev1", 00:17:37.659 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:37.659 "strip_size_kb": 0, 00:17:37.659 "state": "online", 00:17:37.659 "raid_level": "raid1", 00:17:37.659 "superblock": true, 00:17:37.659 "num_base_bdevs": 2, 00:17:37.659 "num_base_bdevs_discovered": 2, 00:17:37.659 "num_base_bdevs_operational": 2, 00:17:37.659 "base_bdevs_list": [ 00:17:37.659 { 00:17:37.659 "name": "spare", 00:17:37.659 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:37.659 "is_configured": true, 00:17:37.659 "data_offset": 256, 00:17:37.659 "data_size": 7936 00:17:37.659 }, 00:17:37.659 { 00:17:37.659 "name": "BaseBdev2", 00:17:37.659 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:37.659 "is_configured": true, 00:17:37.659 "data_offset": 256, 00:17:37.659 "data_size": 7936 00:17:37.659 } 00:17:37.659 ] 00:17:37.659 }' 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.659 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.919 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.919 "name": "raid_bdev1", 00:17:37.919 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:37.919 "strip_size_kb": 0, 00:17:37.919 "state": "online", 00:17:37.919 "raid_level": "raid1", 00:17:37.919 "superblock": true, 00:17:37.919 "num_base_bdevs": 2, 00:17:37.919 "num_base_bdevs_discovered": 2, 00:17:37.919 "num_base_bdevs_operational": 2, 00:17:37.919 "base_bdevs_list": [ 00:17:37.919 { 00:17:37.919 "name": "spare", 00:17:37.919 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:37.919 "is_configured": true, 00:17:37.919 "data_offset": 256, 00:17:37.919 "data_size": 7936 00:17:37.919 }, 00:17:37.919 
{ 00:17:37.919 "name": "BaseBdev2", 00:17:37.919 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:37.919 "is_configured": true, 00:17:37.919 "data_offset": 256, 00:17:37.919 "data_size": 7936 00:17:37.919 } 00:17:37.919 ] 00:17:37.919 }' 00:17:37.919 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.919 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.178 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.178 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.178 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.178 [2024-12-14 12:43:37.819713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.178 [2024-12-14 12:43:37.819746] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.178 [2024-12-14 12:43:37.819826] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.178 [2024-12-14 12:43:37.819896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.178 [2024-12-14 12:43:37.819906] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:38.178 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.179 
12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.179 12:43:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:38.438 /dev/nbd0 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:38.438 12:43:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:38.438 1+0 records in 00:17:38.438 1+0 records out 00:17:38.438 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000479327 s, 8.5 MB/s 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.438 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:38.697 /dev/nbd1 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:38.697 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:38.698 1+0 records in 00:17:38.698 1+0 records out 00:17:38.698 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361081 s, 11.3 MB/s 00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.698 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:38.957 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:38.957 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.957 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.957 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.957 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:38.957 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.957 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.216 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.475 [2024-12-14 12:43:38.971869] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:39.475 [2024-12-14 12:43:38.971968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.475 [2024-12-14 12:43:38.971997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:39.475 [2024-12-14 12:43:38.972006] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.475 [2024-12-14 12:43:38.974212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.475 [2024-12-14 12:43:38.974246] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:39.475 [2024-12-14 12:43:38.974338] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:39.475 [2024-12-14 12:43:38.974387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.475 [2024-12-14 12:43:38.974526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.475 spare 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.475 12:43:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.475 [2024-12-14 12:43:39.074436] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:39.475 [2024-12-14 12:43:39.074469] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.475 [2024-12-14 12:43:39.074751] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:39.475 [2024-12-14 12:43:39.074954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:39.475 [2024-12-14 12:43:39.074973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:39.475 [2024-12-14 12:43:39.075167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.475 12:43:39 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.475 "name": "raid_bdev1", 00:17:39.475 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:39.475 "strip_size_kb": 0, 00:17:39.475 "state": "online", 00:17:39.475 "raid_level": "raid1", 00:17:39.475 "superblock": true, 00:17:39.475 "num_base_bdevs": 2, 00:17:39.475 "num_base_bdevs_discovered": 2, 00:17:39.475 "num_base_bdevs_operational": 2, 00:17:39.475 "base_bdevs_list": [ 00:17:39.475 { 00:17:39.475 "name": "spare", 00:17:39.475 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:39.475 "is_configured": true, 00:17:39.475 "data_offset": 256, 00:17:39.475 "data_size": 7936 00:17:39.475 }, 00:17:39.475 { 00:17:39.475 "name": "BaseBdev2", 00:17:39.475 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:39.475 "is_configured": true, 00:17:39.475 "data_offset": 256, 00:17:39.475 "data_size": 7936 00:17:39.475 } 00:17:39.475 ] 00:17:39.475 }' 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.475 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:40.043 12:43:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.043 "name": "raid_bdev1", 00:17:40.043 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:40.043 "strip_size_kb": 0, 00:17:40.043 "state": "online", 00:17:40.043 "raid_level": "raid1", 00:17:40.043 "superblock": true, 00:17:40.043 "num_base_bdevs": 2, 00:17:40.043 "num_base_bdevs_discovered": 2, 00:17:40.043 "num_base_bdevs_operational": 2, 00:17:40.043 "base_bdevs_list": [ 00:17:40.043 { 00:17:40.043 "name": "spare", 00:17:40.043 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:40.043 "is_configured": true, 00:17:40.043 "data_offset": 256, 00:17:40.043 "data_size": 7936 00:17:40.043 }, 00:17:40.043 { 00:17:40.043 "name": "BaseBdev2", 00:17:40.043 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:40.043 "is_configured": true, 00:17:40.043 "data_offset": 256, 00:17:40.043 "data_size": 7936 00:17:40.043 } 00:17:40.043 ] 00:17:40.043 }' 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.043 12:43:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.043 [2024-12-14 12:43:39.710685] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.043 "name": "raid_bdev1", 00:17:40.043 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:40.043 "strip_size_kb": 0, 00:17:40.043 "state": "online", 00:17:40.043 "raid_level": "raid1", 00:17:40.043 "superblock": true, 00:17:40.043 "num_base_bdevs": 2, 00:17:40.043 "num_base_bdevs_discovered": 1, 00:17:40.043 "num_base_bdevs_operational": 1, 00:17:40.043 "base_bdevs_list": [ 00:17:40.043 { 00:17:40.043 "name": null, 00:17:40.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.043 "is_configured": false, 00:17:40.043 "data_offset": 0, 00:17:40.043 "data_size": 7936 00:17:40.043 }, 00:17:40.043 { 00:17:40.043 "name": "BaseBdev2", 00:17:40.043 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:40.043 "is_configured": true, 00:17:40.043 "data_offset": 256, 00:17:40.043 "data_size": 7936 00:17:40.043 } 00:17:40.043 ] 00:17:40.043 }' 
00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.043 12:43:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.611 12:43:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.611 12:43:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.611 12:43:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.611 [2024-12-14 12:43:40.161935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.611 [2024-12-14 12:43:40.162146] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:40.611 [2024-12-14 12:43:40.162169] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:40.611 [2024-12-14 12:43:40.162208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.611 [2024-12-14 12:43:40.177865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:40.611 12:43:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.611 12:43:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:40.611 [2024-12-14 12:43:40.179750] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:41.550 "name": "raid_bdev1", 00:17:41.550 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:41.550 "strip_size_kb": 0, 00:17:41.550 "state": "online", 00:17:41.550 "raid_level": "raid1", 00:17:41.550 "superblock": true, 00:17:41.550 "num_base_bdevs": 2, 00:17:41.550 "num_base_bdevs_discovered": 2, 00:17:41.550 "num_base_bdevs_operational": 2, 00:17:41.550 "process": { 00:17:41.550 "type": "rebuild", 00:17:41.550 "target": "spare", 00:17:41.550 "progress": { 00:17:41.550 "blocks": 2560, 00:17:41.550 "percent": 32 00:17:41.550 } 00:17:41.550 }, 00:17:41.550 "base_bdevs_list": [ 00:17:41.550 { 00:17:41.550 "name": "spare", 00:17:41.550 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:41.550 "is_configured": true, 00:17:41.550 "data_offset": 256, 00:17:41.550 "data_size": 7936 00:17:41.550 }, 00:17:41.550 { 00:17:41.550 "name": "BaseBdev2", 00:17:41.550 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:41.550 "is_configured": true, 00:17:41.550 "data_offset": 256, 00:17:41.550 "data_size": 7936 00:17:41.550 } 00:17:41.550 ] 00:17:41.550 }' 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.550 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.809 [2024-12-14 12:43:41.335525] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.809 [2024-12-14 12:43:41.384454] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:41.809 [2024-12-14 12:43:41.384512] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.809 [2024-12-14 12:43:41.384542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:41.809 [2024-12-14 12:43:41.384551] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.809 "name": "raid_bdev1", 00:17:41.809 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:41.809 "strip_size_kb": 0, 00:17:41.809 "state": "online", 00:17:41.809 "raid_level": "raid1", 00:17:41.809 "superblock": true, 00:17:41.809 "num_base_bdevs": 2, 00:17:41.809 "num_base_bdevs_discovered": 1, 00:17:41.809 "num_base_bdevs_operational": 1, 00:17:41.809 "base_bdevs_list": [ 00:17:41.809 { 00:17:41.809 "name": null, 00:17:41.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:41.809 "is_configured": false, 00:17:41.809 "data_offset": 0, 00:17:41.809 "data_size": 7936 00:17:41.809 }, 00:17:41.809 { 00:17:41.809 "name": "BaseBdev2", 00:17:41.809 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:41.809 "is_configured": true, 00:17:41.809 
"data_offset": 256, 00:17:41.809 "data_size": 7936 00:17:41.809 } 00:17:41.809 ] 00:17:41.809 }' 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.809 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.398 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:42.398 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.398 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.398 [2024-12-14 12:43:41.905913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:42.398 [2024-12-14 12:43:41.905980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.398 [2024-12-14 12:43:41.906003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:42.398 [2024-12-14 12:43:41.906013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.398 [2024-12-14 12:43:41.906516] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.398 [2024-12-14 12:43:41.906552] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.398 [2024-12-14 12:43:41.906655] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:42.398 [2024-12-14 12:43:41.906673] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:42.398 [2024-12-14 12:43:41.906684] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:42.398 [2024-12-14 12:43:41.906715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.398 [2024-12-14 12:43:41.922354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:42.398 spare 00:17:42.398 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.398 12:43:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:42.398 [2024-12-14 12:43:41.924229] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.356 "name": "raid_bdev1", 00:17:43.356 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:43.356 "strip_size_kb": 0, 00:17:43.356 
"state": "online", 00:17:43.356 "raid_level": "raid1", 00:17:43.356 "superblock": true, 00:17:43.356 "num_base_bdevs": 2, 00:17:43.356 "num_base_bdevs_discovered": 2, 00:17:43.356 "num_base_bdevs_operational": 2, 00:17:43.356 "process": { 00:17:43.356 "type": "rebuild", 00:17:43.356 "target": "spare", 00:17:43.356 "progress": { 00:17:43.356 "blocks": 2560, 00:17:43.356 "percent": 32 00:17:43.356 } 00:17:43.356 }, 00:17:43.356 "base_bdevs_list": [ 00:17:43.356 { 00:17:43.356 "name": "spare", 00:17:43.356 "uuid": "32200292-2738-5b4f-b17d-46f0ec311842", 00:17:43.356 "is_configured": true, 00:17:43.356 "data_offset": 256, 00:17:43.356 "data_size": 7936 00:17:43.356 }, 00:17:43.356 { 00:17:43.356 "name": "BaseBdev2", 00:17:43.356 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:43.356 "is_configured": true, 00:17:43.356 "data_offset": 256, 00:17:43.356 "data_size": 7936 00:17:43.356 } 00:17:43.356 ] 00:17:43.356 }' 00:17:43.356 12:43:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.357 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.357 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.357 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.357 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:43.357 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.357 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.357 [2024-12-14 12:43:43.072148] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.616 [2024-12-14 12:43:43.128918] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:43.616 [2024-12-14 12:43:43.128987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.616 [2024-12-14 12:43:43.129004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.616 [2024-12-14 12:43:43.129011] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.616 12:43:43 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.616 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.617 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.617 "name": "raid_bdev1", 00:17:43.617 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:43.617 "strip_size_kb": 0, 00:17:43.617 "state": "online", 00:17:43.617 "raid_level": "raid1", 00:17:43.617 "superblock": true, 00:17:43.617 "num_base_bdevs": 2, 00:17:43.617 "num_base_bdevs_discovered": 1, 00:17:43.617 "num_base_bdevs_operational": 1, 00:17:43.617 "base_bdevs_list": [ 00:17:43.617 { 00:17:43.617 "name": null, 00:17:43.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.617 "is_configured": false, 00:17:43.617 "data_offset": 0, 00:17:43.617 "data_size": 7936 00:17:43.617 }, 00:17:43.617 { 00:17:43.617 "name": "BaseBdev2", 00:17:43.617 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:43.617 "is_configured": true, 00:17:43.617 "data_offset": 256, 00:17:43.617 "data_size": 7936 00:17:43.617 } 00:17:43.617 ] 00:17:43.617 }' 00:17:43.617 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.617 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.185 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.185 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.185 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.185 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.185 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.185 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.185 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.185 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.186 "name": "raid_bdev1", 00:17:44.186 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:44.186 "strip_size_kb": 0, 00:17:44.186 "state": "online", 00:17:44.186 "raid_level": "raid1", 00:17:44.186 "superblock": true, 00:17:44.186 "num_base_bdevs": 2, 00:17:44.186 "num_base_bdevs_discovered": 1, 00:17:44.186 "num_base_bdevs_operational": 1, 00:17:44.186 "base_bdevs_list": [ 00:17:44.186 { 00:17:44.186 "name": null, 00:17:44.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.186 "is_configured": false, 00:17:44.186 "data_offset": 0, 00:17:44.186 "data_size": 7936 00:17:44.186 }, 00:17:44.186 { 00:17:44.186 "name": "BaseBdev2", 00:17:44.186 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:44.186 "is_configured": true, 00:17:44.186 "data_offset": 256, 00:17:44.186 "data_size": 7936 00:17:44.186 } 00:17:44.186 ] 00:17:44.186 }' 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.186 [2024-12-14 12:43:43.802312] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:44.186 [2024-12-14 12:43:43.802363] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.186 [2024-12-14 12:43:43.802399] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:44.186 [2024-12-14 12:43:43.802418] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.186 [2024-12-14 12:43:43.802876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.186 [2024-12-14 12:43:43.802903] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:44.186 [2024-12-14 12:43:43.802985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:44.186 [2024-12-14 12:43:43.803000] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.186 [2024-12-14 12:43:43.803010] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:44.186 [2024-12-14 12:43:43.803020] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:44.186 BaseBdev1 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.186 12:43:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.125 "name": "raid_bdev1", 00:17:45.125 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:45.125 "strip_size_kb": 0, 00:17:45.125 "state": "online", 00:17:45.125 "raid_level": "raid1", 00:17:45.125 "superblock": true, 00:17:45.125 "num_base_bdevs": 2, 00:17:45.125 "num_base_bdevs_discovered": 1, 00:17:45.125 "num_base_bdevs_operational": 1, 00:17:45.125 "base_bdevs_list": [ 00:17:45.125 { 00:17:45.125 "name": null, 00:17:45.125 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.125 "is_configured": false, 00:17:45.125 "data_offset": 0, 00:17:45.125 "data_size": 7936 00:17:45.125 }, 00:17:45.125 { 00:17:45.125 "name": "BaseBdev2", 00:17:45.125 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:45.125 "is_configured": true, 00:17:45.125 "data_offset": 256, 00:17:45.125 "data_size": 7936 00:17:45.125 } 00:17:45.125 ] 00:17:45.125 }' 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.125 12:43:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.693 "name": "raid_bdev1", 00:17:45.693 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:45.693 "strip_size_kb": 0, 00:17:45.693 "state": "online", 00:17:45.693 "raid_level": "raid1", 00:17:45.693 "superblock": true, 00:17:45.693 "num_base_bdevs": 2, 00:17:45.693 "num_base_bdevs_discovered": 1, 00:17:45.693 "num_base_bdevs_operational": 1, 00:17:45.693 "base_bdevs_list": [ 00:17:45.693 { 00:17:45.693 "name": null, 00:17:45.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.693 "is_configured": false, 00:17:45.693 "data_offset": 0, 00:17:45.693 "data_size": 7936 00:17:45.693 }, 00:17:45.693 { 00:17:45.693 "name": "BaseBdev2", 00:17:45.693 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:45.693 "is_configured": true, 00:17:45.693 "data_offset": 256, 00:17:45.693 "data_size": 7936 00:17:45.693 } 00:17:45.693 ] 00:17:45.693 }' 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.693 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.693 [2024-12-14 12:43:45.379638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.693 [2024-12-14 12:43:45.379815] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.693 [2024-12-14 12:43:45.379840] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:45.693 request: 00:17:45.693 { 00:17:45.693 "base_bdev": "BaseBdev1", 00:17:45.694 "raid_bdev": "raid_bdev1", 00:17:45.694 "method": "bdev_raid_add_base_bdev", 00:17:45.694 "req_id": 1 00:17:45.694 } 00:17:45.694 Got JSON-RPC error response 00:17:45.694 response: 00:17:45.694 { 00:17:45.694 "code": -22, 00:17:45.694 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:45.694 } 00:17:45.694 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:45.694 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:45.694 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:45.694 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:45.694 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:45.694 12:43:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.073 "name": "raid_bdev1", 00:17:47.073 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:47.073 "strip_size_kb": 0, 00:17:47.073 "state": "online", 00:17:47.073 "raid_level": "raid1", 00:17:47.073 "superblock": true, 00:17:47.073 "num_base_bdevs": 2, 00:17:47.073 "num_base_bdevs_discovered": 1, 00:17:47.073 "num_base_bdevs_operational": 1, 00:17:47.073 "base_bdevs_list": [ 00:17:47.073 { 00:17:47.073 "name": null, 00:17:47.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.073 "is_configured": false, 00:17:47.073 "data_offset": 0, 00:17:47.073 "data_size": 7936 00:17:47.073 }, 00:17:47.073 { 00:17:47.073 "name": "BaseBdev2", 00:17:47.073 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:47.073 "is_configured": true, 00:17:47.073 "data_offset": 256, 00:17:47.073 "data_size": 7936 00:17:47.073 } 00:17:47.073 ] 00:17:47.073 }' 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.073 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.333 12:43:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.333 "name": "raid_bdev1", 00:17:47.333 "uuid": "aa641f6f-8af0-4148-84b5-adc5f692fa60", 00:17:47.333 "strip_size_kb": 0, 00:17:47.333 "state": "online", 00:17:47.333 "raid_level": "raid1", 00:17:47.333 "superblock": true, 00:17:47.333 "num_base_bdevs": 2, 00:17:47.333 "num_base_bdevs_discovered": 1, 00:17:47.333 "num_base_bdevs_operational": 1, 00:17:47.333 "base_bdevs_list": [ 00:17:47.333 { 00:17:47.333 "name": null, 00:17:47.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.333 "is_configured": false, 00:17:47.333 "data_offset": 0, 00:17:47.333 "data_size": 7936 00:17:47.333 }, 00:17:47.333 { 00:17:47.333 "name": "BaseBdev2", 00:17:47.333 "uuid": "9e70852f-2c94-5908-bb1d-63137972ffb4", 00:17:47.333 "is_configured": true, 00:17:47.333 "data_offset": 256, 00:17:47.333 "data_size": 7936 00:17:47.333 } 00:17:47.333 ] 00:17:47.333 }' 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.333 12:43:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 88243 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 88243 ']' 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 88243 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.333 12:43:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88243 00:17:47.333 killing process with pid 88243 00:17:47.333 Received shutdown signal, test time was about 60.000000 seconds 00:17:47.333 00:17:47.333 Latency(us) 00:17:47.333 [2024-12-14T12:43:47.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.333 [2024-12-14T12:43:47.071Z] =================================================================================================================== 00:17:47.334 [2024-12-14T12:43:47.072Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.334 12:43:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.334 12:43:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.334 12:43:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88243' 00:17:47.334 12:43:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 88243 00:17:47.334 [2024-12-14 12:43:47.025819] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.334 [2024-12-14 12:43:47.025936] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.334 12:43:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 88243 00:17:47.334 [2024-12-14 
12:43:47.025987] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.334 [2024-12-14 12:43:47.025998] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:47.593 [2024-12-14 12:43:47.313962] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:48.973 12:43:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:48.973 00:17:48.973 real 0m19.607s 00:17:48.973 user 0m25.634s 00:17:48.973 sys 0m2.430s 00:17:48.973 12:43:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.973 12:43:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 ************************************ 00:17:48.973 END TEST raid_rebuild_test_sb_4k 00:17:48.973 ************************************ 00:17:48.973 12:43:48 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:48.973 12:43:48 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:48.973 12:43:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:48.973 12:43:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.973 12:43:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 ************************************ 00:17:48.973 START TEST raid_state_function_test_sb_md_separate 00:17:48.973 ************************************ 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:48.973 
12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:48.973 12:43:48 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=88931 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:48.973 Process raid pid: 88931 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88931' 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 88931 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88931 ']' 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.973 12:43:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:48.973 [2024-12-14 12:43:48.548303] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:48.973 [2024-12-14 12:43:48.548423] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.233 [2024-12-14 12:43:48.717601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.233 [2024-12-14 12:43:48.824342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.493 [2024-12-14 12:43:49.011793] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.493 [2024-12-14 12:43:49.011833] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.752 [2024-12-14 12:43:49.367788] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.752 [2024-12-14 12:43:49.367834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:49.752 [2024-12-14 12:43:49.367844] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.752 [2024-12-14 12:43:49.367853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.752 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.753 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.753 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.753 "name": "Existed_Raid", 00:17:49.753 "uuid": "4981145b-a9d3-4746-95c1-1400ed1fa719", 00:17:49.753 "strip_size_kb": 0, 00:17:49.753 "state": "configuring", 00:17:49.753 "raid_level": "raid1", 00:17:49.753 "superblock": true, 00:17:49.753 "num_base_bdevs": 2, 00:17:49.753 "num_base_bdevs_discovered": 0, 00:17:49.753 "num_base_bdevs_operational": 2, 00:17:49.753 "base_bdevs_list": [ 00:17:49.753 { 00:17:49.753 "name": "BaseBdev1", 00:17:49.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.753 "is_configured": false, 00:17:49.753 "data_offset": 0, 00:17:49.753 "data_size": 0 00:17:49.753 }, 00:17:49.753 { 00:17:49.753 "name": "BaseBdev2", 00:17:49.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.753 "is_configured": false, 00:17:49.753 "data_offset": 0, 00:17:49.753 "data_size": 0 00:17:49.753 } 00:17:49.753 ] 00:17:49.753 }' 00:17:49.753 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.753 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.323 
[2024-12-14 12:43:49.834935] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.323 [2024-12-14 12:43:49.834972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.323 [2024-12-14 12:43:49.846913] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.323 [2024-12-14 12:43:49.846948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.323 [2024-12-14 12:43:49.846958] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.323 [2024-12-14 12:43:49.846969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.323 [2024-12-14 12:43:49.893274] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.323 
BaseBdev1 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.323 [ 00:17:50.323 { 00:17:50.323 "name": "BaseBdev1", 00:17:50.323 "aliases": [ 00:17:50.323 "7cb740ce-7383-42ce-9be8-3dc716d2d0cc" 00:17:50.323 ], 00:17:50.323 "product_name": "Malloc disk", 
00:17:50.323 "block_size": 4096, 00:17:50.323 "num_blocks": 8192, 00:17:50.323 "uuid": "7cb740ce-7383-42ce-9be8-3dc716d2d0cc", 00:17:50.323 "md_size": 32, 00:17:50.323 "md_interleave": false, 00:17:50.323 "dif_type": 0, 00:17:50.323 "assigned_rate_limits": { 00:17:50.323 "rw_ios_per_sec": 0, 00:17:50.323 "rw_mbytes_per_sec": 0, 00:17:50.323 "r_mbytes_per_sec": 0, 00:17:50.323 "w_mbytes_per_sec": 0 00:17:50.323 }, 00:17:50.323 "claimed": true, 00:17:50.323 "claim_type": "exclusive_write", 00:17:50.323 "zoned": false, 00:17:50.323 "supported_io_types": { 00:17:50.323 "read": true, 00:17:50.323 "write": true, 00:17:50.323 "unmap": true, 00:17:50.323 "flush": true, 00:17:50.323 "reset": true, 00:17:50.323 "nvme_admin": false, 00:17:50.323 "nvme_io": false, 00:17:50.323 "nvme_io_md": false, 00:17:50.323 "write_zeroes": true, 00:17:50.323 "zcopy": true, 00:17:50.323 "get_zone_info": false, 00:17:50.323 "zone_management": false, 00:17:50.323 "zone_append": false, 00:17:50.323 "compare": false, 00:17:50.323 "compare_and_write": false, 00:17:50.323 "abort": true, 00:17:50.323 "seek_hole": false, 00:17:50.323 "seek_data": false, 00:17:50.323 "copy": true, 00:17:50.323 "nvme_iov_md": false 00:17:50.323 }, 00:17:50.323 "memory_domains": [ 00:17:50.323 { 00:17:50.323 "dma_device_id": "system", 00:17:50.323 "dma_device_type": 1 00:17:50.323 }, 00:17:50.323 { 00:17:50.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.323 "dma_device_type": 2 00:17:50.323 } 00:17:50.323 ], 00:17:50.323 "driver_specific": {} 00:17:50.323 } 00:17:50.323 ] 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:50.323 12:43:49 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.323 "name": "Existed_Raid", 00:17:50.323 "uuid": "4f2db055-c9fb-4091-a028-7d7298751d23", 
00:17:50.323 "strip_size_kb": 0, 00:17:50.323 "state": "configuring", 00:17:50.323 "raid_level": "raid1", 00:17:50.323 "superblock": true, 00:17:50.323 "num_base_bdevs": 2, 00:17:50.323 "num_base_bdevs_discovered": 1, 00:17:50.323 "num_base_bdevs_operational": 2, 00:17:50.323 "base_bdevs_list": [ 00:17:50.323 { 00:17:50.323 "name": "BaseBdev1", 00:17:50.323 "uuid": "7cb740ce-7383-42ce-9be8-3dc716d2d0cc", 00:17:50.323 "is_configured": true, 00:17:50.323 "data_offset": 256, 00:17:50.323 "data_size": 7936 00:17:50.323 }, 00:17:50.323 { 00:17:50.323 "name": "BaseBdev2", 00:17:50.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.323 "is_configured": false, 00:17:50.323 "data_offset": 0, 00:17:50.323 "data_size": 0 00:17:50.323 } 00:17:50.323 ] 00:17:50.323 }' 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.323 12:43:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.899 [2024-12-14 12:43:50.396470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.899 [2024-12-14 12:43:50.396522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:50.899 12:43:50 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.899 [2024-12-14 12:43:50.408499] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.899 [2024-12-14 12:43:50.410217] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.899 [2024-12-14 12:43:50.410254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.899 "name": "Existed_Raid", 00:17:50.899 "uuid": "011d2c32-ef40-47b6-ab0b-440f3c47c17c", 00:17:50.899 "strip_size_kb": 0, 00:17:50.899 "state": "configuring", 00:17:50.899 "raid_level": "raid1", 00:17:50.899 "superblock": true, 00:17:50.899 "num_base_bdevs": 2, 00:17:50.899 "num_base_bdevs_discovered": 1, 00:17:50.899 "num_base_bdevs_operational": 2, 00:17:50.899 "base_bdevs_list": [ 00:17:50.899 { 00:17:50.899 "name": "BaseBdev1", 00:17:50.899 "uuid": "7cb740ce-7383-42ce-9be8-3dc716d2d0cc", 00:17:50.899 "is_configured": true, 00:17:50.899 "data_offset": 256, 00:17:50.899 "data_size": 7936 00:17:50.899 }, 00:17:50.899 { 00:17:50.899 "name": "BaseBdev2", 00:17:50.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.899 "is_configured": false, 00:17:50.899 "data_offset": 0, 00:17:50.899 "data_size": 0 00:17:50.899 } 00:17:50.899 ] 00:17:50.899 }' 00:17:50.899 12:43:50 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.899 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.158 [2024-12-14 12:43:50.879996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.158 [2024-12-14 12:43:50.880255] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:51.158 [2024-12-14 12:43:50.880271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:51.158 [2024-12-14 12:43:50.880350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:51.158 [2024-12-14 12:43:50.880479] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:51.158 [2024-12-14 12:43:50.880498] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:51.158 [2024-12-14 12:43:50.880592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.158 BaseBdev2 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.158 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.418 [ 00:17:51.418 { 00:17:51.418 "name": "BaseBdev2", 00:17:51.418 "aliases": [ 00:17:51.418 "55ef53ec-4f28-4afd-9684-3c28cd283c65" 00:17:51.418 ], 00:17:51.418 "product_name": "Malloc disk", 00:17:51.418 "block_size": 4096, 00:17:51.418 "num_blocks": 8192, 00:17:51.418 "uuid": "55ef53ec-4f28-4afd-9684-3c28cd283c65", 00:17:51.418 "md_size": 32, 00:17:51.418 "md_interleave": false, 00:17:51.418 "dif_type": 0, 00:17:51.418 "assigned_rate_limits": { 00:17:51.418 "rw_ios_per_sec": 0, 00:17:51.418 "rw_mbytes_per_sec": 0, 00:17:51.418 "r_mbytes_per_sec": 0, 00:17:51.418 "w_mbytes_per_sec": 0 00:17:51.418 }, 00:17:51.418 "claimed": true, 00:17:51.418 "claim_type": 
"exclusive_write", 00:17:51.418 "zoned": false, 00:17:51.418 "supported_io_types": { 00:17:51.418 "read": true, 00:17:51.418 "write": true, 00:17:51.418 "unmap": true, 00:17:51.418 "flush": true, 00:17:51.418 "reset": true, 00:17:51.418 "nvme_admin": false, 00:17:51.418 "nvme_io": false, 00:17:51.418 "nvme_io_md": false, 00:17:51.418 "write_zeroes": true, 00:17:51.418 "zcopy": true, 00:17:51.418 "get_zone_info": false, 00:17:51.418 "zone_management": false, 00:17:51.418 "zone_append": false, 00:17:51.418 "compare": false, 00:17:51.418 "compare_and_write": false, 00:17:51.418 "abort": true, 00:17:51.418 "seek_hole": false, 00:17:51.418 "seek_data": false, 00:17:51.418 "copy": true, 00:17:51.418 "nvme_iov_md": false 00:17:51.418 }, 00:17:51.418 "memory_domains": [ 00:17:51.418 { 00:17:51.418 "dma_device_id": "system", 00:17:51.418 "dma_device_type": 1 00:17:51.418 }, 00:17:51.418 { 00:17:51.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.418 "dma_device_type": 2 00:17:51.418 } 00:17:51.418 ], 00:17:51.418 "driver_specific": {} 00:17:51.418 } 00:17:51.418 ] 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.418 
12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.418 "name": "Existed_Raid", 00:17:51.418 "uuid": "011d2c32-ef40-47b6-ab0b-440f3c47c17c", 00:17:51.418 "strip_size_kb": 0, 00:17:51.418 "state": "online", 00:17:51.418 "raid_level": "raid1", 00:17:51.418 "superblock": true, 00:17:51.418 "num_base_bdevs": 2, 00:17:51.418 "num_base_bdevs_discovered": 2, 00:17:51.418 "num_base_bdevs_operational": 2, 00:17:51.418 
"base_bdevs_list": [ 00:17:51.418 { 00:17:51.418 "name": "BaseBdev1", 00:17:51.418 "uuid": "7cb740ce-7383-42ce-9be8-3dc716d2d0cc", 00:17:51.418 "is_configured": true, 00:17:51.418 "data_offset": 256, 00:17:51.418 "data_size": 7936 00:17:51.418 }, 00:17:51.418 { 00:17:51.418 "name": "BaseBdev2", 00:17:51.418 "uuid": "55ef53ec-4f28-4afd-9684-3c28cd283c65", 00:17:51.418 "is_configured": true, 00:17:51.418 "data_offset": 256, 00:17:51.418 "data_size": 7936 00:17:51.418 } 00:17:51.418 ] 00:17:51.418 }' 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.418 12:43:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:51.678 [2024-12-14 12:43:51.355501] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:51.678 "name": "Existed_Raid", 00:17:51.678 "aliases": [ 00:17:51.678 "011d2c32-ef40-47b6-ab0b-440f3c47c17c" 00:17:51.678 ], 00:17:51.678 "product_name": "Raid Volume", 00:17:51.678 "block_size": 4096, 00:17:51.678 "num_blocks": 7936, 00:17:51.678 "uuid": "011d2c32-ef40-47b6-ab0b-440f3c47c17c", 00:17:51.678 "md_size": 32, 00:17:51.678 "md_interleave": false, 00:17:51.678 "dif_type": 0, 00:17:51.678 "assigned_rate_limits": { 00:17:51.678 "rw_ios_per_sec": 0, 00:17:51.678 "rw_mbytes_per_sec": 0, 00:17:51.678 "r_mbytes_per_sec": 0, 00:17:51.678 "w_mbytes_per_sec": 0 00:17:51.678 }, 00:17:51.678 "claimed": false, 00:17:51.678 "zoned": false, 00:17:51.678 "supported_io_types": { 00:17:51.678 "read": true, 00:17:51.678 "write": true, 00:17:51.678 "unmap": false, 00:17:51.678 "flush": false, 00:17:51.678 "reset": true, 00:17:51.678 "nvme_admin": false, 00:17:51.678 "nvme_io": false, 00:17:51.678 "nvme_io_md": false, 00:17:51.678 "write_zeroes": true, 00:17:51.678 "zcopy": false, 00:17:51.678 "get_zone_info": false, 00:17:51.678 "zone_management": false, 00:17:51.678 "zone_append": false, 00:17:51.678 "compare": false, 00:17:51.678 "compare_and_write": false, 00:17:51.678 "abort": false, 00:17:51.678 "seek_hole": false, 00:17:51.678 "seek_data": false, 00:17:51.678 "copy": false, 00:17:51.678 "nvme_iov_md": false 00:17:51.678 }, 00:17:51.678 "memory_domains": [ 00:17:51.678 { 00:17:51.678 "dma_device_id": "system", 00:17:51.678 "dma_device_type": 1 00:17:51.678 }, 00:17:51.678 { 00:17:51.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.678 "dma_device_type": 2 00:17:51.678 }, 00:17:51.678 { 
00:17:51.678 "dma_device_id": "system", 00:17:51.678 "dma_device_type": 1 00:17:51.678 }, 00:17:51.678 { 00:17:51.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.678 "dma_device_type": 2 00:17:51.678 } 00:17:51.678 ], 00:17:51.678 "driver_specific": { 00:17:51.678 "raid": { 00:17:51.678 "uuid": "011d2c32-ef40-47b6-ab0b-440f3c47c17c", 00:17:51.678 "strip_size_kb": 0, 00:17:51.678 "state": "online", 00:17:51.678 "raid_level": "raid1", 00:17:51.678 "superblock": true, 00:17:51.678 "num_base_bdevs": 2, 00:17:51.678 "num_base_bdevs_discovered": 2, 00:17:51.678 "num_base_bdevs_operational": 2, 00:17:51.678 "base_bdevs_list": [ 00:17:51.678 { 00:17:51.678 "name": "BaseBdev1", 00:17:51.678 "uuid": "7cb740ce-7383-42ce-9be8-3dc716d2d0cc", 00:17:51.678 "is_configured": true, 00:17:51.678 "data_offset": 256, 00:17:51.678 "data_size": 7936 00:17:51.678 }, 00:17:51.678 { 00:17:51.678 "name": "BaseBdev2", 00:17:51.678 "uuid": "55ef53ec-4f28-4afd-9684-3c28cd283c65", 00:17:51.678 "is_configured": true, 00:17:51.678 "data_offset": 256, 00:17:51.678 "data_size": 7936 00:17:51.678 } 00:17:51.678 ] 00:17:51.678 } 00:17:51.678 } 00:17:51.678 }' 00:17:51.678 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:51.938 BaseBdev2' 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.938 [2024-12-14 12:43:51.570892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.938 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.198 "name": "Existed_Raid", 00:17:52.198 "uuid": "011d2c32-ef40-47b6-ab0b-440f3c47c17c", 00:17:52.198 "strip_size_kb": 0, 00:17:52.198 "state": "online", 00:17:52.198 "raid_level": "raid1", 00:17:52.198 "superblock": true, 00:17:52.198 "num_base_bdevs": 2, 00:17:52.198 "num_base_bdevs_discovered": 1, 00:17:52.198 "num_base_bdevs_operational": 1, 00:17:52.198 "base_bdevs_list": [ 00:17:52.198 { 00:17:52.198 "name": null, 00:17:52.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.198 "is_configured": false, 00:17:52.198 "data_offset": 0, 00:17:52.198 "data_size": 7936 00:17:52.198 }, 00:17:52.198 { 00:17:52.198 "name": "BaseBdev2", 00:17:52.198 "uuid": 
"55ef53ec-4f28-4afd-9684-3c28cd283c65", 00:17:52.198 "is_configured": true, 00:17:52.198 "data_offset": 256, 00:17:52.198 "data_size": 7936 00:17:52.198 } 00:17:52.198 ] 00:17:52.198 }' 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.198 12:43:51 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.458 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.458 [2024-12-14 12:43:52.153627] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:52.458 [2024-12-14 12:43:52.153732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.718 [2024-12-14 12:43:52.252680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.718 [2024-12-14 12:43:52.252731] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.718 [2024-12-14 12:43:52.252743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:52.718 12:43:52 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 88931 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88931 ']' 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88931 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88931 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.718 killing process with pid 88931 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88931' 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88931 00:17:52.718 [2024-12-14 12:43:52.349139] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.718 12:43:52 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88931 00:17:52.718 [2024-12-14 12:43:52.364944] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.099 12:43:53 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:54.099 00:17:54.099 real 0m4.994s 00:17:54.099 user 0m7.229s 00:17:54.099 sys 0m0.826s 00:17:54.099 12:43:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.099 
12:43:53 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.099 ************************************ 00:17:54.099 END TEST raid_state_function_test_sb_md_separate 00:17:54.099 ************************************ 00:17:54.099 12:43:53 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:54.099 12:43:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:54.099 12:43:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.099 12:43:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.099 ************************************ 00:17:54.099 START TEST raid_superblock_test_md_separate 00:17:54.099 ************************************ 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=89178 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 89178 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 89178 ']' 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.099 12:43:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.100 12:43:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:54.100 12:43:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.100 12:43:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.100 [2024-12-14 12:43:53.604406] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:54.100 [2024-12-14 12:43:53.604513] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89178 ] 00:17:54.100 [2024-12-14 12:43:53.776113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.360 [2024-12-14 12:43:53.886615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.360 [2024-12-14 12:43:54.071511] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.360 [2024-12-14 12:43:54.071570] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:54.928 12:43:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.928 malloc1 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.928 [2024-12-14 12:43:54.479410] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:54.928 [2024-12-14 12:43:54.479460] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.928 [2024-12-14 12:43:54.479483] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:54.928 [2024-12-14 12:43:54.479492] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.928 [2024-12-14 12:43:54.481349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.928 [2024-12-14 12:43:54.481381] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:54.928 pt1 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.928 malloc2 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.928 12:43:54 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.928 [2024-12-14 12:43:54.533375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.928 [2024-12-14 12:43:54.533420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.928 [2024-12-14 12:43:54.533439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:54.928 [2024-12-14 12:43:54.533448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.928 [2024-12-14 12:43:54.535318] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.928 [2024-12-14 12:43:54.535348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.928 pt2 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.928 [2024-12-14 12:43:54.545377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:54.928 [2024-12-14 12:43:54.547169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:54.928 [2024-12-14 12:43:54.547337] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:54.928 [2024-12-14 12:43:54.547351] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:54.928 [2024-12-14 12:43:54.547418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:54.928 [2024-12-14 12:43:54.547525] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:54.928 [2024-12-14 12:43:54.547544] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:54.928 [2024-12-14 12:43:54.547667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.928 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.929 12:43:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.929 "name": "raid_bdev1", 00:17:54.929 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c", 00:17:54.929 "strip_size_kb": 0, 00:17:54.929 "state": "online", 00:17:54.929 "raid_level": "raid1", 00:17:54.929 "superblock": true, 00:17:54.929 "num_base_bdevs": 2, 00:17:54.929 "num_base_bdevs_discovered": 2, 00:17:54.929 "num_base_bdevs_operational": 2, 00:17:54.929 "base_bdevs_list": [ 00:17:54.929 { 00:17:54.929 "name": "pt1", 00:17:54.929 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:54.929 "is_configured": true, 00:17:54.929 "data_offset": 256, 00:17:54.929 "data_size": 7936 00:17:54.929 }, 00:17:54.929 { 00:17:54.929 "name": "pt2", 00:17:54.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:54.929 "is_configured": true, 00:17:54.929 "data_offset": 256, 00:17:54.929 "data_size": 7936 00:17:54.929 } 00:17:54.929 ] 00:17:54.929 }' 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.929 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:55.498 [2024-12-14 12:43:54.976908] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.498 12:43:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.498 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.498 "name": "raid_bdev1", 00:17:55.498 "aliases": [ 00:17:55.498 "12d198a2-0d55-46e2-86b8-9cd96dc64a0c" 00:17:55.498 ], 00:17:55.498 "product_name": "Raid Volume", 00:17:55.498 "block_size": 4096, 00:17:55.498 "num_blocks": 7936, 00:17:55.498 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c", 00:17:55.498 "md_size": 32, 00:17:55.498 "md_interleave": false, 00:17:55.498 "dif_type": 0, 00:17:55.498 "assigned_rate_limits": { 00:17:55.498 "rw_ios_per_sec": 0, 00:17:55.498 "rw_mbytes_per_sec": 0, 00:17:55.499 "r_mbytes_per_sec": 0, 00:17:55.499 "w_mbytes_per_sec": 0 00:17:55.499 }, 00:17:55.499 "claimed": false, 00:17:55.499 "zoned": false, 
00:17:55.499 "supported_io_types": { 00:17:55.499 "read": true, 00:17:55.499 "write": true, 00:17:55.499 "unmap": false, 00:17:55.499 "flush": false, 00:17:55.499 "reset": true, 00:17:55.499 "nvme_admin": false, 00:17:55.499 "nvme_io": false, 00:17:55.499 "nvme_io_md": false, 00:17:55.499 "write_zeroes": true, 00:17:55.499 "zcopy": false, 00:17:55.499 "get_zone_info": false, 00:17:55.499 "zone_management": false, 00:17:55.499 "zone_append": false, 00:17:55.499 "compare": false, 00:17:55.499 "compare_and_write": false, 00:17:55.499 "abort": false, 00:17:55.499 "seek_hole": false, 00:17:55.499 "seek_data": false, 00:17:55.499 "copy": false, 00:17:55.499 "nvme_iov_md": false 00:17:55.499 }, 00:17:55.499 "memory_domains": [ 00:17:55.499 { 00:17:55.499 "dma_device_id": "system", 00:17:55.499 "dma_device_type": 1 00:17:55.499 }, 00:17:55.499 { 00:17:55.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.499 "dma_device_type": 2 00:17:55.499 }, 00:17:55.499 { 00:17:55.499 "dma_device_id": "system", 00:17:55.499 "dma_device_type": 1 00:17:55.499 }, 00:17:55.499 { 00:17:55.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.499 "dma_device_type": 2 00:17:55.499 } 00:17:55.499 ], 00:17:55.499 "driver_specific": { 00:17:55.499 "raid": { 00:17:55.499 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c", 00:17:55.499 "strip_size_kb": 0, 00:17:55.499 "state": "online", 00:17:55.499 "raid_level": "raid1", 00:17:55.499 "superblock": true, 00:17:55.499 "num_base_bdevs": 2, 00:17:55.499 "num_base_bdevs_discovered": 2, 00:17:55.499 "num_base_bdevs_operational": 2, 00:17:55.499 "base_bdevs_list": [ 00:17:55.499 { 00:17:55.499 "name": "pt1", 00:17:55.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.499 "is_configured": true, 00:17:55.499 "data_offset": 256, 00:17:55.499 "data_size": 7936 00:17:55.499 }, 00:17:55.499 { 00:17:55.499 "name": "pt2", 00:17:55.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.499 "is_configured": true, 00:17:55.499 "data_offset": 256, 
00:17:55.499 "data_size": 7936 00:17:55.499 } 00:17:55.499 ] 00:17:55.499 } 00:17:55.499 } 00:17:55.499 }' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:55.499 pt2' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:55.499 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.499 [2024-12-14 12:43:55.216434] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=12d198a2-0d55-46e2-86b8-9cd96dc64a0c 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 12d198a2-0d55-46e2-86b8-9cd96dc64a0c ']' 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.759 [2024-12-14 12:43:55.264115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.759 [2024-12-14 12:43:55.264140] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.759 [2024-12-14 12:43:55.264221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.759 [2024-12-14 12:43:55.264279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.759 [2024-12-14 12:43:55.264291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.759 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:55.760 12:43:55 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.760 [2024-12-14 12:43:55.395900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:55.760 [2024-12-14 12:43:55.397745] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:55.760 [2024-12-14 12:43:55.397828] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:55.760 [2024-12-14 12:43:55.397878] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:55.760 [2024-12-14 12:43:55.397892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.760 [2024-12-14 12:43:55.397902] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:55.760 request: 00:17:55.760 { 00:17:55.760 "name": 
"raid_bdev1", 00:17:55.760 "raid_level": "raid1", 00:17:55.760 "base_bdevs": [ 00:17:55.760 "malloc1", 00:17:55.760 "malloc2" 00:17:55.760 ], 00:17:55.760 "superblock": false, 00:17:55.760 "method": "bdev_raid_create", 00:17:55.760 "req_id": 1 00:17:55.760 } 00:17:55.760 Got JSON-RPC error response 00:17:55.760 response: 00:17:55.760 { 00:17:55.760 "code": -17, 00:17:55.760 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:55.760 } 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.760 [2024-12-14 12:43:55.447780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:55.760 [2024-12-14 12:43:55.447831] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.760 [2024-12-14 12:43:55.447847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:55.760 [2024-12-14 12:43:55.447858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.760 [2024-12-14 12:43:55.449815] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.760 [2024-12-14 12:43:55.449855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:55.760 [2024-12-14 12:43:55.449902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:55.760 [2024-12-14 12:43:55.449953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.760 pt1 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.760 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.020 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.020 "name": "raid_bdev1", 00:17:56.020 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c", 00:17:56.020 "strip_size_kb": 0, 00:17:56.020 "state": "configuring", 00:17:56.020 "raid_level": "raid1", 00:17:56.020 "superblock": true, 00:17:56.020 "num_base_bdevs": 2, 00:17:56.020 "num_base_bdevs_discovered": 1, 00:17:56.020 "num_base_bdevs_operational": 2, 00:17:56.020 "base_bdevs_list": [ 00:17:56.020 { 00:17:56.020 "name": "pt1", 00:17:56.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.020 "is_configured": true, 00:17:56.020 "data_offset": 256, 00:17:56.020 "data_size": 7936 00:17:56.020 }, 00:17:56.020 { 00:17:56.020 "name": null, 00:17:56.020 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.020 "is_configured": false, 00:17:56.020 "data_offset": 256, 00:17:56.020 "data_size": 7936 00:17:56.020 } 00:17:56.020 ] 00:17:56.020 }' 00:17:56.020 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.020 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.279 [2024-12-14 12:43:55.875100] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.279 [2024-12-14 12:43:55.875238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.279 [2024-12-14 12:43:55.875280] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:56.279 [2024-12-14 12:43:55.875311] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.279 [2024-12-14 12:43:55.875566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.279 [2024-12-14 12:43:55.875624] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.279 [2024-12-14 12:43:55.875705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:56.279 [2024-12-14 12:43:55.875755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.279 [2024-12-14 12:43:55.875902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:56.279 [2024-12-14 12:43:55.875942] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.279 [2024-12-14 12:43:55.876051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:56.279 [2024-12-14 12:43:55.876205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:56.279 [2024-12-14 12:43:55.876242] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:56.279 [2024-12-14 12:43:55.876383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.279 pt2 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.279 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.280 "name": "raid_bdev1", 00:17:56.280 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c", 00:17:56.280 "strip_size_kb": 0, 00:17:56.280 "state": "online", 00:17:56.280 "raid_level": "raid1", 00:17:56.280 "superblock": true, 00:17:56.280 "num_base_bdevs": 2, 00:17:56.280 "num_base_bdevs_discovered": 2, 00:17:56.280 "num_base_bdevs_operational": 2, 00:17:56.280 "base_bdevs_list": [ 00:17:56.280 { 00:17:56.280 "name": "pt1", 00:17:56.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.280 "is_configured": true, 00:17:56.280 "data_offset": 256, 00:17:56.280 "data_size": 7936 00:17:56.280 }, 00:17:56.280 { 00:17:56.280 "name": "pt2", 00:17:56.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.280 "is_configured": true, 00:17:56.280 "data_offset": 256, 
00:17:56.280 "data_size": 7936
00:17:56.280 }
00:17:56.280 ]
00:17:56.280 }'
00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:56.280 12:43:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.849 [2024-12-14 12:43:56.346555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:56.849 "name": "raid_bdev1",
00:17:56.849 "aliases": [
00:17:56.849 "12d198a2-0d55-46e2-86b8-9cd96dc64a0c"
00:17:56.849 ],
00:17:56.849 "product_name": "Raid Volume",
00:17:56.849 "block_size": 4096,
00:17:56.849 "num_blocks": 7936,
00:17:56.849 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c",
00:17:56.849 "md_size": 32,
00:17:56.849 "md_interleave": false,
00:17:56.849 "dif_type": 0,
00:17:56.849 "assigned_rate_limits": {
00:17:56.849 "rw_ios_per_sec": 0,
00:17:56.849 "rw_mbytes_per_sec": 0,
00:17:56.849 "r_mbytes_per_sec": 0,
00:17:56.849 "w_mbytes_per_sec": 0
00:17:56.849 },
00:17:56.849 "claimed": false,
00:17:56.849 "zoned": false,
00:17:56.849 "supported_io_types": {
00:17:56.849 "read": true,
00:17:56.849 "write": true,
00:17:56.849 "unmap": false,
00:17:56.849 "flush": false,
00:17:56.849 "reset": true,
00:17:56.849 "nvme_admin": false,
00:17:56.849 "nvme_io": false,
00:17:56.849 "nvme_io_md": false,
00:17:56.849 "write_zeroes": true,
00:17:56.849 "zcopy": false,
00:17:56.849 "get_zone_info": false,
00:17:56.849 "zone_management": false,
00:17:56.849 "zone_append": false,
00:17:56.849 "compare": false,
00:17:56.849 "compare_and_write": false,
00:17:56.849 "abort": false,
00:17:56.849 "seek_hole": false,
00:17:56.849 "seek_data": false,
00:17:56.849 "copy": false,
00:17:56.849 "nvme_iov_md": false
00:17:56.849 },
00:17:56.849 "memory_domains": [
00:17:56.849 {
00:17:56.849 "dma_device_id": "system",
00:17:56.849 "dma_device_type": 1
00:17:56.849 },
00:17:56.849 {
00:17:56.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:56.849 "dma_device_type": 2
00:17:56.849 },
00:17:56.849 {
00:17:56.849 "dma_device_id": "system",
00:17:56.849 "dma_device_type": 1
00:17:56.849 },
00:17:56.849 {
00:17:56.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:56.849 "dma_device_type": 2
00:17:56.849 }
00:17:56.849 ],
00:17:56.849 "driver_specific": {
00:17:56.849 "raid": {
00:17:56.849 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c",
00:17:56.849 "strip_size_kb": 0,
00:17:56.849 "state": "online",
00:17:56.849 "raid_level": "raid1",
00:17:56.849 "superblock": true,
00:17:56.849 "num_base_bdevs": 2,
00:17:56.849 "num_base_bdevs_discovered": 2,
00:17:56.849 "num_base_bdevs_operational": 2,
00:17:56.849 "base_bdevs_list": [
00:17:56.849 {
00:17:56.849 "name": "pt1",
00:17:56.849 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:56.849 "is_configured": true,
00:17:56.849 "data_offset": 256,
00:17:56.849 "data_size": 7936
00:17:56.849 },
00:17:56.849 {
00:17:56.849 "name": "pt2",
00:17:56.849 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:56.849 "is_configured": true,
00:17:56.849 "data_offset": 256,
00:17:56.849 "data_size": 7936
00:17:56.849 }
00:17:56.849 ]
00:17:56.849 }
00:17:56.849 }
00:17:56.849 }'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:56.849 pt2'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0'
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]]
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:56.849 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
[2024-12-14 12:43:56.554207] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:56.850 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 12d198a2-0d55-46e2-86b8-9cd96dc64a0c '!=' 12d198a2-0d55-46e2-86b8-9cd96dc64a0c ']'
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.108 [2024-12-14 12:43:56.601891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:57.108 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:57.109 "name": "raid_bdev1",
00:17:57.109 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c",
00:17:57.109 "strip_size_kb": 0,
00:17:57.109 "state": "online",
00:17:57.109 "raid_level": "raid1",
00:17:57.109 "superblock": true,
00:17:57.109 "num_base_bdevs": 2,
00:17:57.109 "num_base_bdevs_discovered": 1,
00:17:57.109 "num_base_bdevs_operational": 1,
00:17:57.109 "base_bdevs_list": [
00:17:57.109 {
00:17:57.109 "name": null,
00:17:57.109 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.109 "is_configured": false,
00:17:57.109 "data_offset": 0,
00:17:57.109 "data_size": 7936
00:17:57.109 },
00:17:57.109 {
00:17:57.109 "name": "pt2",
00:17:57.109 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:57.109 "is_configured": true,
00:17:57.109 "data_offset": 256,
00:17:57.109 "data_size": 7936
00:17:57.109 }
00:17:57.109 ]
00:17:57.109 }'
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:57.109 12:43:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.368 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:57.368 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.368 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.368 [2024-12-14 12:43:57.053107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:57.368 [2024-12-14 12:43:57.053187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:57.368 [2024-12-14 12:43:57.053291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:57.368 [2024-12-14 12:43:57.053373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:57.368 [2024-12-14 12:43:57.053421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:17:57.368 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.369 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:57.369 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.369 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.369 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]'
00:17:57.369 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev=
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']'
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 ))
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ ))
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs ))
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 ))
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 ))
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.629 [2024-12-14 12:43:57.128940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:57.629 [2024-12-14 12:43:57.129047] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
[2024-12-14 12:43:57.129081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380
00:17:57.629 [2024-12-14 12:43:57.129124] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:57.629 [2024-12-14 12:43:57.131167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:57.629 [2024-12-14 12:43:57.131242] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:57.629 [2024-12-14 12:43:57.131323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:57.629 [2024-12-14 12:43:57.131427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:57.629 [2024-12-14 12:43:57.131572] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200
00:17:57.629 [2024-12-14 12:43:57.131617] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:57.629 [2024-12-14 12:43:57.131717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:17:57.629 [2024-12-14 12:43:57.131867] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200
00:17:57.629 [2024-12-14 12:43:57.131875] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200
00:17:57.629 [2024-12-14 12:43:57.131987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:57.629 pt2
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:57.629 "name": "raid_bdev1",
00:17:57.629 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c",
00:17:57.629 "strip_size_kb": 0,
00:17:57.629 "state": "online",
00:17:57.629 "raid_level": "raid1",
00:17:57.629 "superblock": true,
00:17:57.629 "num_base_bdevs": 2,
00:17:57.629 "num_base_bdevs_discovered": 1,
00:17:57.629 "num_base_bdevs_operational": 1,
00:17:57.629 "base_bdevs_list": [
00:17:57.629 {
00:17:57.629 "name": null,
00:17:57.629 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:57.629 "is_configured": false,
00:17:57.629 "data_offset": 256,
00:17:57.629 "data_size": 7936
00:17:57.629 },
00:17:57.629 {
00:17:57.629 "name": "pt2",
00:17:57.629 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:57.629 "is_configured": true,
00:17:57.629 "data_offset": 256,
00:17:57.629 "data_size": 7936
00:17:57.629 }
00:17:57.629 ]
00:17:57.629 }'
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:57.629 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.889 [2024-12-14 12:43:57.560195] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:57.889 [2024-12-14 12:43:57.560227] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:57.889 [2024-12-14 12:43:57.560303] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:57.889 [2024-12-14 12:43:57.560356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:57.889 [2024-12-14 12:43:57.560365] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]'
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev=
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']'
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']'
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:57.889 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:57.889 [2024-12-14 12:43:57.620130] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:57.889 [2024-12-14 12:43:57.620204] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:57.889 [2024-12-14 12:43:57.620226] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:17:57.889 [2024-12-14 12:43:57.620235] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:57.889 [2024-12-14 12:43:57.622282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:57.889 [2024-12-14 12:43:57.622318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:57.889 [2024-12-14 12:43:57.622379] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:57.889 [2024-12-14 12:43:57.622433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:57.889 [2024-12-14 12:43:57.622563] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2)
00:17:57.889 [2024-12-14 12:43:57.622572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:57.889 [2024-12-14 12:43:57.622601] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring
00:17:57.889 [2024-12-14 12:43:57.622710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:57.889 [2024-12-14 12:43:57.622786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900
00:17:57.889 [2024-12-14 12:43:57.622795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096
00:17:57.889 [2024-12-14 12:43:57.622858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:17:57.889 [2024-12-14 12:43:57.622996] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900
00:17:57.889 [2024-12-14 12:43:57.623013] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900
00:17:57.889 [2024-12-14 12:43:57.623220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:57.889 pt1
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']'
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:58.149 "name": "raid_bdev1",
00:17:58.149 "uuid": "12d198a2-0d55-46e2-86b8-9cd96dc64a0c",
00:17:58.149 "strip_size_kb": 0,
00:17:58.149 "state": "online",
00:17:58.149 "raid_level": "raid1",
00:17:58.149 "superblock": true,
00:17:58.149 "num_base_bdevs": 2,
00:17:58.149 "num_base_bdevs_discovered": 1,
"num_base_bdevs_operational": 1, 00:17:58.149 "base_bdevs_list": [ 00:17:58.149 { 00:17:58.149 "name": null, 00:17:58.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.149 "is_configured": false, 00:17:58.149 "data_offset": 256, 00:17:58.149 "data_size": 7936 00:17:58.149 }, 00:17:58.149 { 00:17:58.149 "name": "pt2", 00:17:58.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.149 "is_configured": true, 00:17:58.149 "data_offset": 256, 00:17:58.149 "data_size": 7936 00:17:58.149 } 00:17:58.149 ] 00:17:58.149 }' 00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.149 12:43:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.414 [2024-12-14 
12:43:58.091535] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 12d198a2-0d55-46e2-86b8-9cd96dc64a0c '!=' 12d198a2-0d55-46e2-86b8-9cd96dc64a0c ']' 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 89178 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 89178 ']' 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 89178 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.414 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89178 00:17:58.681 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.681 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.681 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89178' 00:17:58.681 killing process with pid 89178 00:17:58.681 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 89178 00:17:58.681 [2024-12-14 12:43:58.171624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.681 [2024-12-14 12:43:58.171718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.681 [2024-12-14 12:43:58.171768] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:58.681 [2024-12-14 12:43:58.171787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:58.681 12:43:58 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 89178 00:17:58.681 [2024-12-14 12:43:58.393883] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:00.061 12:43:59 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:00.061 00:18:00.061 real 0m5.966s 00:18:00.061 user 0m9.074s 00:18:00.061 sys 0m0.996s 00:18:00.061 12:43:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.061 ************************************ 00:18:00.061 END TEST raid_superblock_test_md_separate 00:18:00.061 ************************************ 00:18:00.061 12:43:59 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.061 12:43:59 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:00.061 12:43:59 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:00.061 12:43:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:00.061 12:43:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.061 12:43:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:00.061 ************************************ 00:18:00.061 START TEST raid_rebuild_test_sb_md_separate 00:18:00.061 ************************************ 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:00.061 
12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=89507 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 89507 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 89507 ']' 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:00.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.061 12:43:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.061 [2024-12-14 12:43:59.650604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:00.061 [2024-12-14 12:43:59.650802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89507 ] 00:18:00.061 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:00.061 Zero copy mechanism will not be used. 00:18:00.321 [2024-12-14 12:43:59.823831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.321 [2024-12-14 12:43:59.928984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.580 [2024-12-14 12:44:00.121257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.580 [2024-12-14 12:44:00.121346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.840 BaseBdev1_malloc 
00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.840 [2024-12-14 12:44:00.520940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:00.840 [2024-12-14 12:44:00.521035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.840 [2024-12-14 12:44:00.521101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:00.840 [2024-12-14 12:44:00.521132] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.840 [2024-12-14 12:44:00.522925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.840 [2024-12-14 12:44:00.522996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:00.840 BaseBdev1 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:00.840 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:00.841 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.841 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.841 BaseBdev2_malloc 00:18:00.841 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.841 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:00.841 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.841 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.841 [2024-12-14 12:44:00.573609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:00.841 [2024-12-14 12:44:00.573712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.841 [2024-12-14 12:44:00.573750] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:00.841 [2024-12-14 12:44:00.573789] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.841 [2024-12-14 12:44:00.575670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.841 [2024-12-14 12:44:00.575745] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:01.101 BaseBdev2 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.101 spare_malloc 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.101 spare_delay 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.101 [2024-12-14 12:44:00.650793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.101 [2024-12-14 12:44:00.650917] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.101 [2024-12-14 12:44:00.650947] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:01.101 [2024-12-14 12:44:00.650960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.101 [2024-12-14 12:44:00.652925] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.101 [2024-12-14 12:44:00.652998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.101 spare 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.101 [2024-12-14 12:44:00.662814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.101 [2024-12-14 12:44:00.664540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.101 [2024-12-14 12:44:00.664774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:01.101 [2024-12-14 12:44:00.664824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:01.101 [2024-12-14 12:44:00.664909] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:01.101 [2024-12-14 12:44:00.665059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:01.101 [2024-12-14 12:44:00.665099] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:01.101 [2024-12-14 12:44:00.665238] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.101 12:44:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.101 "name": "raid_bdev1", 00:18:01.101 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:01.101 "strip_size_kb": 0, 00:18:01.101 "state": "online", 00:18:01.101 "raid_level": "raid1", 00:18:01.101 "superblock": true, 00:18:01.101 "num_base_bdevs": 2, 00:18:01.101 "num_base_bdevs_discovered": 2, 00:18:01.101 "num_base_bdevs_operational": 2, 00:18:01.101 "base_bdevs_list": [ 00:18:01.101 { 00:18:01.101 "name": "BaseBdev1", 00:18:01.101 "uuid": "f3538a84-1d95-5dfe-bfb6-07c997ec706a", 00:18:01.101 "is_configured": true, 00:18:01.101 "data_offset": 256, 00:18:01.101 "data_size": 7936 00:18:01.101 }, 00:18:01.101 { 00:18:01.101 "name": "BaseBdev2", 00:18:01.101 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:01.101 "is_configured": true, 00:18:01.101 "data_offset": 256, 00:18:01.101 "data_size": 7936 
00:18:01.101 } 00:18:01.101 ] 00:18:01.101 }' 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.101 12:44:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.671 [2024-12-14 12:44:01.134352] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.671 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:01.671 [2024-12-14 12:44:01.401679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:01.931 /dev/nbd0 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:01.931 1+0 records in 00:18:01.931 1+0 records out 00:18:01.931 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394277 s, 10.4 MB/s 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.931 12:44:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:01.931 12:44:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:02.500 7936+0 records in 00:18:02.500 7936+0 records out 00:18:02.500 32505856 bytes (33 MB, 31 MiB) copied, 0.582367 s, 55.8 MB/s 00:18:02.500 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:02.500 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.500 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:02.500 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.500 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:02.500 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.500 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:02.760 [2024-12-14 12:44:02.239587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.760 12:44:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.760 [2024-12-14 12:44:02.275612] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.760 "name": "raid_bdev1", 00:18:02.760 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:02.760 "strip_size_kb": 0, 00:18:02.760 "state": "online", 00:18:02.760 "raid_level": "raid1", 00:18:02.760 "superblock": true, 00:18:02.760 "num_base_bdevs": 2, 00:18:02.760 "num_base_bdevs_discovered": 1, 00:18:02.760 "num_base_bdevs_operational": 1, 00:18:02.760 "base_bdevs_list": [ 00:18:02.760 { 00:18:02.760 "name": null, 00:18:02.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.760 "is_configured": false, 00:18:02.760 "data_offset": 0, 00:18:02.760 "data_size": 7936 00:18:02.760 }, 00:18:02.760 { 00:18:02.760 "name": "BaseBdev2", 00:18:02.760 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:02.760 "is_configured": true, 00:18:02.760 "data_offset": 256, 00:18:02.760 "data_size": 7936 00:18:02.760 } 00:18:02.760 ] 00:18:02.760 }' 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.760 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.019 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.019 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.019 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.019 [2024-12-14 12:44:02.722863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.019 [2024-12-14 12:44:02.737839] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:03.019 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.019 12:44:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:03.019 [2024-12-14 12:44:02.739670] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.400 "name": "raid_bdev1", 00:18:04.400 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:04.400 "strip_size_kb": 0, 00:18:04.400 "state": "online", 00:18:04.400 "raid_level": "raid1", 00:18:04.400 "superblock": true, 00:18:04.400 "num_base_bdevs": 2, 00:18:04.400 "num_base_bdevs_discovered": 2, 00:18:04.400 "num_base_bdevs_operational": 2, 00:18:04.400 "process": { 00:18:04.400 "type": "rebuild", 00:18:04.400 "target": "spare", 00:18:04.400 "progress": { 00:18:04.400 "blocks": 2560, 00:18:04.400 "percent": 32 00:18:04.400 } 00:18:04.400 }, 00:18:04.400 "base_bdevs_list": [ 00:18:04.400 { 00:18:04.400 "name": "spare", 00:18:04.400 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:04.400 "is_configured": true, 00:18:04.400 "data_offset": 256, 00:18:04.400 "data_size": 7936 00:18:04.400 }, 00:18:04.400 { 00:18:04.400 "name": "BaseBdev2", 00:18:04.400 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:04.400 "is_configured": true, 00:18:04.400 "data_offset": 256, 00:18:04.400 "data_size": 7936 00:18:04.400 } 00:18:04.400 ] 00:18:04.400 }' 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.400 12:44:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.400 [2024-12-14 12:44:03.903799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.400 [2024-12-14 12:44:03.944748] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:04.400 [2024-12-14 12:44:03.944803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.400 [2024-12-14 12:44:03.944816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.400 [2024-12-14 12:44:03.944828] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.400 12:44:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.400 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.401 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.401 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.401 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.401 12:44:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.401 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.401 "name": "raid_bdev1", 00:18:04.401 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:04.401 "strip_size_kb": 0, 00:18:04.401 "state": "online", 00:18:04.401 "raid_level": "raid1", 00:18:04.401 "superblock": true, 00:18:04.401 "num_base_bdevs": 2, 00:18:04.401 "num_base_bdevs_discovered": 1, 00:18:04.401 "num_base_bdevs_operational": 1, 00:18:04.401 "base_bdevs_list": [ 00:18:04.401 { 00:18:04.401 "name": null, 00:18:04.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.401 "is_configured": false, 00:18:04.401 "data_offset": 0, 00:18:04.401 "data_size": 7936 00:18:04.401 }, 00:18:04.401 { 00:18:04.401 "name": "BaseBdev2", 00:18:04.401 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:04.401 "is_configured": true, 00:18:04.401 "data_offset": 256, 00:18:04.401 "data_size": 7936 00:18:04.401 } 00:18:04.401 ] 00:18:04.401 }' 00:18:04.401 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.401 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.970 "name": "raid_bdev1", 00:18:04.970 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:04.970 "strip_size_kb": 0, 00:18:04.970 "state": "online", 00:18:04.970 "raid_level": "raid1", 00:18:04.970 "superblock": true, 00:18:04.970 "num_base_bdevs": 2, 00:18:04.970 "num_base_bdevs_discovered": 1, 00:18:04.970 "num_base_bdevs_operational": 1, 00:18:04.970 "base_bdevs_list": [ 00:18:04.970 { 00:18:04.970 "name": null, 00:18:04.970 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.970 
"is_configured": false, 00:18:04.970 "data_offset": 0, 00:18:04.970 "data_size": 7936 00:18:04.970 }, 00:18:04.970 { 00:18:04.970 "name": "BaseBdev2", 00:18:04.970 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:04.970 "is_configured": true, 00:18:04.970 "data_offset": 256, 00:18:04.970 "data_size": 7936 00:18:04.970 } 00:18:04.970 ] 00:18:04.970 }' 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.970 [2024-12-14 12:44:04.571882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:04.970 [2024-12-14 12:44:04.585509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.970 12:44:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:04.970 [2024-12-14 12:44:04.587313] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:05.908 12:44:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.908 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.168 "name": "raid_bdev1", 00:18:06.168 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:06.168 "strip_size_kb": 0, 00:18:06.168 "state": "online", 00:18:06.168 "raid_level": "raid1", 00:18:06.168 "superblock": true, 00:18:06.168 "num_base_bdevs": 2, 00:18:06.168 "num_base_bdevs_discovered": 2, 00:18:06.168 "num_base_bdevs_operational": 2, 00:18:06.168 "process": { 00:18:06.168 "type": "rebuild", 00:18:06.168 "target": "spare", 00:18:06.168 "progress": { 00:18:06.168 "blocks": 2560, 00:18:06.168 "percent": 32 00:18:06.168 } 00:18:06.168 }, 00:18:06.168 "base_bdevs_list": [ 00:18:06.168 { 00:18:06.168 "name": "spare", 00:18:06.168 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:06.168 "is_configured": true, 00:18:06.168 "data_offset": 256, 00:18:06.168 "data_size": 7936 00:18:06.168 }, 
00:18:06.168 { 00:18:06.168 "name": "BaseBdev2", 00:18:06.168 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:06.168 "is_configured": true, 00:18:06.168 "data_offset": 256, 00:18:06.168 "data_size": 7936 00:18:06.168 } 00:18:06.168 ] 00:18:06.168 }' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:06.168 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=700 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.168 12:44:05 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.168 "name": "raid_bdev1", 00:18:06.168 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:06.168 "strip_size_kb": 0, 00:18:06.168 "state": "online", 00:18:06.168 "raid_level": "raid1", 00:18:06.168 "superblock": true, 00:18:06.168 "num_base_bdevs": 2, 00:18:06.168 "num_base_bdevs_discovered": 2, 00:18:06.168 "num_base_bdevs_operational": 2, 00:18:06.168 "process": { 00:18:06.168 "type": "rebuild", 00:18:06.168 "target": "spare", 00:18:06.168 "progress": { 00:18:06.168 "blocks": 2816, 00:18:06.168 "percent": 35 00:18:06.168 } 00:18:06.168 }, 00:18:06.168 "base_bdevs_list": [ 00:18:06.168 { 00:18:06.168 "name": "spare", 00:18:06.168 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:06.168 "is_configured": true, 00:18:06.168 "data_offset": 256, 00:18:06.168 "data_size": 7936 00:18:06.168 }, 00:18:06.168 { 00:18:06.168 "name": "BaseBdev2", 00:18:06.168 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:06.168 
"is_configured": true, 00:18:06.168 "data_offset": 256, 00:18:06.168 "data_size": 7936 00:18:06.168 } 00:18:06.168 ] 00:18:06.168 }' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.168 12:44:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.550 12:44:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.550 "name": "raid_bdev1", 00:18:07.550 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:07.550 "strip_size_kb": 0, 00:18:07.550 "state": "online", 00:18:07.550 "raid_level": "raid1", 00:18:07.550 "superblock": true, 00:18:07.550 "num_base_bdevs": 2, 00:18:07.550 "num_base_bdevs_discovered": 2, 00:18:07.550 "num_base_bdevs_operational": 2, 00:18:07.550 "process": { 00:18:07.550 "type": "rebuild", 00:18:07.550 "target": "spare", 00:18:07.550 "progress": { 00:18:07.550 "blocks": 5888, 00:18:07.550 "percent": 74 00:18:07.550 } 00:18:07.550 }, 00:18:07.550 "base_bdevs_list": [ 00:18:07.550 { 00:18:07.550 "name": "spare", 00:18:07.550 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:07.550 "is_configured": true, 00:18:07.550 "data_offset": 256, 00:18:07.550 "data_size": 7936 00:18:07.550 }, 00:18:07.550 { 00:18:07.550 "name": "BaseBdev2", 00:18:07.550 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:07.550 "is_configured": true, 00:18:07.550 "data_offset": 256, 00:18:07.550 "data_size": 7936 00:18:07.550 } 00:18:07.550 ] 00:18:07.550 }' 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.550 12:44:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.550 12:44:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.550 12:44:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:08.120 [2024-12-14 12:44:07.700460] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:08.120 [2024-12-14 12:44:07.700536] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:08.120 [2024-12-14 12:44:07.700663] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.380 "name": "raid_bdev1", 00:18:08.380 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:08.380 "strip_size_kb": 0, 00:18:08.380 "state": "online", 00:18:08.380 "raid_level": "raid1", 00:18:08.380 "superblock": true, 00:18:08.380 
"num_base_bdevs": 2, 00:18:08.380 "num_base_bdevs_discovered": 2, 00:18:08.380 "num_base_bdevs_operational": 2, 00:18:08.380 "base_bdevs_list": [ 00:18:08.380 { 00:18:08.380 "name": "spare", 00:18:08.380 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:08.380 "is_configured": true, 00:18:08.380 "data_offset": 256, 00:18:08.380 "data_size": 7936 00:18:08.380 }, 00:18:08.380 { 00:18:08.380 "name": "BaseBdev2", 00:18:08.380 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:08.380 "is_configured": true, 00:18:08.380 "data_offset": 256, 00:18:08.380 "data_size": 7936 00:18:08.380 } 00:18:08.380 ] 00:18:08.380 }' 00:18:08.380 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.640 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:08.640 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.640 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:08.640 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:08.640 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.640 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.640 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.640 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.641 12:44:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.641 "name": "raid_bdev1", 00:18:08.641 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:08.641 "strip_size_kb": 0, 00:18:08.641 "state": "online", 00:18:08.641 "raid_level": "raid1", 00:18:08.641 "superblock": true, 00:18:08.641 "num_base_bdevs": 2, 00:18:08.641 "num_base_bdevs_discovered": 2, 00:18:08.641 "num_base_bdevs_operational": 2, 00:18:08.641 "base_bdevs_list": [ 00:18:08.641 { 00:18:08.641 "name": "spare", 00:18:08.641 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:08.641 "is_configured": true, 00:18:08.641 "data_offset": 256, 00:18:08.641 "data_size": 7936 00:18:08.641 }, 00:18:08.641 { 00:18:08.641 "name": "BaseBdev2", 00:18:08.641 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:08.641 "is_configured": true, 00:18:08.641 "data_offset": 256, 00:18:08.641 "data_size": 7936 00:18:08.641 } 00:18:08.641 ] 00:18:08.641 }' 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.641 "name": "raid_bdev1", 00:18:08.641 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:08.641 
"strip_size_kb": 0, 00:18:08.641 "state": "online", 00:18:08.641 "raid_level": "raid1", 00:18:08.641 "superblock": true, 00:18:08.641 "num_base_bdevs": 2, 00:18:08.641 "num_base_bdevs_discovered": 2, 00:18:08.641 "num_base_bdevs_operational": 2, 00:18:08.641 "base_bdevs_list": [ 00:18:08.641 { 00:18:08.641 "name": "spare", 00:18:08.641 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:08.641 "is_configured": true, 00:18:08.641 "data_offset": 256, 00:18:08.641 "data_size": 7936 00:18:08.641 }, 00:18:08.641 { 00:18:08.641 "name": "BaseBdev2", 00:18:08.641 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:08.641 "is_configured": true, 00:18:08.641 "data_offset": 256, 00:18:08.641 "data_size": 7936 00:18:08.641 } 00:18:08.641 ] 00:18:08.641 }' 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.641 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.210 [2024-12-14 12:44:08.771453] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.210 [2024-12-14 12:44:08.771535] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.210 [2024-12-14 12:44:08.771639] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.210 [2024-12-14 12:44:08.771738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.210 [2024-12-14 12:44:08.771785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.210 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:09.211 12:44:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:09.471 /dev/nbd0 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.471 1+0 records in 00:18:09.471 1+0 records out 00:18:09.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280797 s, 14.6 MB/s 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:09.471 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:09.730 /dev/nbd1 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:09.730 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.731 1+0 records in 00:18:09.731 1+0 records out 00:18:09.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000228604 s, 17.9 MB/s 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:09.731 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:09.990 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:09.990 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.990 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:09.990 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:09.990 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:09.990 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:09.990 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.251 [2024-12-14 12:44:09.978971] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.251 [2024-12-14 12:44:09.979055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.251 [2024-12-14 12:44:09.979084] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:10.251 [2024-12-14 12:44:09.979093] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:10.251 [2024-12-14 12:44:09.981037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.251 [2024-12-14 12:44:09.981081] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.251 [2024-12-14 12:44:09.981160] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:10.251 [2024-12-14 12:44:09.981217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.251 [2024-12-14 12:44:09.981360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.251 spare 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.251 12:44:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.511 [2024-12-14 12:44:10.081259] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:10.511 [2024-12-14 12:44:10.081300] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:10.511 [2024-12-14 12:44:10.081409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:10.511 [2024-12-14 12:44:10.081574] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:10.511 [2024-12-14 12:44:10.081585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:10.511 [2024-12-14 12:44:10.081720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.511 "name": "raid_bdev1", 00:18:10.511 "uuid": 
"e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:10.511 "strip_size_kb": 0, 00:18:10.511 "state": "online", 00:18:10.511 "raid_level": "raid1", 00:18:10.511 "superblock": true, 00:18:10.511 "num_base_bdevs": 2, 00:18:10.511 "num_base_bdevs_discovered": 2, 00:18:10.511 "num_base_bdevs_operational": 2, 00:18:10.511 "base_bdevs_list": [ 00:18:10.511 { 00:18:10.511 "name": "spare", 00:18:10.511 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:10.511 "is_configured": true, 00:18:10.511 "data_offset": 256, 00:18:10.511 "data_size": 7936 00:18:10.511 }, 00:18:10.511 { 00:18:10.511 "name": "BaseBdev2", 00:18:10.511 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:10.511 "is_configured": true, 00:18:10.511 "data_offset": 256, 00:18:10.511 "data_size": 7936 00:18:10.511 } 00:18:10.511 ] 00:18:10.511 }' 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.511 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.081 "name": "raid_bdev1", 00:18:11.081 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:11.081 "strip_size_kb": 0, 00:18:11.081 "state": "online", 00:18:11.081 "raid_level": "raid1", 00:18:11.081 "superblock": true, 00:18:11.081 "num_base_bdevs": 2, 00:18:11.081 "num_base_bdevs_discovered": 2, 00:18:11.081 "num_base_bdevs_operational": 2, 00:18:11.081 "base_bdevs_list": [ 00:18:11.081 { 00:18:11.081 "name": "spare", 00:18:11.081 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:11.081 "is_configured": true, 00:18:11.081 "data_offset": 256, 00:18:11.081 "data_size": 7936 00:18:11.081 }, 00:18:11.081 { 00:18:11.081 "name": "BaseBdev2", 00:18:11.081 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:11.081 "is_configured": true, 00:18:11.081 "data_offset": 256, 00:18:11.081 "data_size": 7936 00:18:11.081 } 00:18:11.081 ] 00:18:11.081 }' 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 [2024-12-14 12:44:10.717933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.081 12:44:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.081 "name": "raid_bdev1", 00:18:11.081 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:11.081 "strip_size_kb": 0, 00:18:11.081 "state": "online", 00:18:11.081 "raid_level": "raid1", 00:18:11.081 "superblock": true, 00:18:11.081 "num_base_bdevs": 2, 00:18:11.081 "num_base_bdevs_discovered": 1, 00:18:11.081 "num_base_bdevs_operational": 1, 00:18:11.081 "base_bdevs_list": [ 00:18:11.081 { 00:18:11.081 "name": null, 00:18:11.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.081 "is_configured": false, 00:18:11.081 "data_offset": 0, 00:18:11.081 "data_size": 7936 00:18:11.081 }, 00:18:11.081 { 00:18:11.081 "name": "BaseBdev2", 00:18:11.081 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:11.081 "is_configured": true, 00:18:11.081 "data_offset": 256, 00:18:11.081 "data_size": 7936 00:18:11.081 } 00:18:11.081 ] 00:18:11.081 }' 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.081 12:44:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.650 12:44:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.650 12:44:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.650 12:44:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.650 [2024-12-14 12:44:11.145220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.650 [2024-12-14 12:44:11.145417] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:11.650 [2024-12-14 12:44:11.145433] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:11.650 [2024-12-14 12:44:11.145469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.650 [2024-12-14 12:44:11.159210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:11.650 12:44:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.650 12:44:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:11.650 [2024-12-14 12:44:11.160999] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.590 12:44:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.590 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.591 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.591 "name": "raid_bdev1", 00:18:12.591 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:12.591 "strip_size_kb": 0, 00:18:12.591 "state": "online", 00:18:12.591 "raid_level": "raid1", 00:18:12.591 "superblock": true, 00:18:12.591 "num_base_bdevs": 2, 00:18:12.591 "num_base_bdevs_discovered": 2, 00:18:12.591 "num_base_bdevs_operational": 2, 00:18:12.591 "process": { 00:18:12.591 "type": "rebuild", 00:18:12.591 "target": "spare", 00:18:12.591 "progress": { 00:18:12.591 "blocks": 2560, 00:18:12.591 "percent": 32 00:18:12.591 } 00:18:12.591 }, 00:18:12.591 "base_bdevs_list": [ 00:18:12.591 { 00:18:12.591 "name": "spare", 00:18:12.591 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:12.591 "is_configured": true, 00:18:12.591 "data_offset": 256, 00:18:12.591 "data_size": 7936 00:18:12.591 }, 00:18:12.591 { 00:18:12.591 "name": "BaseBdev2", 00:18:12.591 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:12.591 "is_configured": true, 00:18:12.591 "data_offset": 256, 00:18:12.591 "data_size": 7936 00:18:12.591 } 00:18:12.591 ] 00:18:12.591 
}' 00:18:12.591 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.591 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.591 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.591 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.591 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:12.591 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.591 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.591 [2024-12-14 12:44:12.309201] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.851 [2024-12-14 12:44:12.366221] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:12.851 [2024-12-14 12:44:12.366290] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.851 [2024-12-14 12:44:12.366305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.851 [2024-12-14 12:44:12.366327] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.851 "name": "raid_bdev1", 00:18:12.851 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:12.851 "strip_size_kb": 0, 00:18:12.851 "state": "online", 00:18:12.851 "raid_level": "raid1", 00:18:12.851 "superblock": true, 00:18:12.851 "num_base_bdevs": 2, 00:18:12.851 "num_base_bdevs_discovered": 1, 00:18:12.851 "num_base_bdevs_operational": 1, 00:18:12.851 "base_bdevs_list": [ 00:18:12.851 { 00:18:12.851 "name": 
null, 00:18:12.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.851 "is_configured": false, 00:18:12.851 "data_offset": 0, 00:18:12.851 "data_size": 7936 00:18:12.851 }, 00:18:12.851 { 00:18:12.851 "name": "BaseBdev2", 00:18:12.851 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:12.851 "is_configured": true, 00:18:12.851 "data_offset": 256, 00:18:12.851 "data_size": 7936 00:18:12.851 } 00:18:12.851 ] 00:18:12.851 }' 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.851 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.111 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:13.111 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.111 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.111 [2024-12-14 12:44:12.834922] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:13.111 [2024-12-14 12:44:12.835061] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.111 [2024-12-14 12:44:12.835108] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:13.111 [2024-12-14 12:44:12.835168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.111 [2024-12-14 12:44:12.835469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.111 [2024-12-14 12:44:12.835532] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.111 [2024-12-14 12:44:12.835628] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:13.111 [2024-12-14 12:44:12.835671] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:13.111 [2024-12-14 12:44:12.835714] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:13.111 [2024-12-14 12:44:12.835768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.371 [2024-12-14 12:44:12.850275] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:13.371 spare 00:18:13.371 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.371 12:44:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:13.371 [2024-12-14 12:44:12.852332] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.311 12:44:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.311 "name": "raid_bdev1", 00:18:14.311 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:14.311 "strip_size_kb": 0, 00:18:14.311 "state": "online", 00:18:14.311 "raid_level": "raid1", 00:18:14.311 "superblock": true, 00:18:14.311 "num_base_bdevs": 2, 00:18:14.311 "num_base_bdevs_discovered": 2, 00:18:14.311 "num_base_bdevs_operational": 2, 00:18:14.311 "process": { 00:18:14.311 "type": "rebuild", 00:18:14.311 "target": "spare", 00:18:14.311 "progress": { 00:18:14.311 "blocks": 2560, 00:18:14.311 "percent": 32 00:18:14.311 } 00:18:14.311 }, 00:18:14.311 "base_bdevs_list": [ 00:18:14.311 { 00:18:14.311 "name": "spare", 00:18:14.311 "uuid": "4cebd6d8-3720-5a35-9183-fc10bc8464fc", 00:18:14.311 "is_configured": true, 00:18:14.311 "data_offset": 256, 00:18:14.311 "data_size": 7936 00:18:14.311 }, 00:18:14.311 { 00:18:14.311 "name": "BaseBdev2", 00:18:14.311 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:14.311 "is_configured": true, 00:18:14.311 "data_offset": 256, 00:18:14.311 "data_size": 7936 00:18:14.311 } 00:18:14.311 ] 00:18:14.311 }' 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.311 12:44:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.311 [2024-12-14 12:44:14.004394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.584 [2024-12-14 12:44:14.058030] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:14.584 [2024-12-14 12:44:14.058172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.584 [2024-12-14 12:44:14.058216] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.585 [2024-12-14 12:44:14.058239] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.585 "name": "raid_bdev1", 00:18:14.585 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:14.585 "strip_size_kb": 0, 00:18:14.585 "state": "online", 00:18:14.585 "raid_level": "raid1", 00:18:14.585 "superblock": true, 00:18:14.585 "num_base_bdevs": 2, 00:18:14.585 "num_base_bdevs_discovered": 1, 00:18:14.585 "num_base_bdevs_operational": 1, 00:18:14.585 "base_bdevs_list": [ 00:18:14.585 { 00:18:14.585 "name": null, 00:18:14.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.585 "is_configured": false, 00:18:14.585 "data_offset": 0, 00:18:14.585 "data_size": 7936 00:18:14.585 }, 00:18:14.585 { 00:18:14.585 "name": "BaseBdev2", 00:18:14.585 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:14.585 "is_configured": true, 00:18:14.585 "data_offset": 256, 00:18:14.585 "data_size": 7936 00:18:14.585 } 00:18:14.585 ] 00:18:14.585 }' 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.585 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.882 "name": "raid_bdev1", 00:18:14.882 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:14.882 "strip_size_kb": 0, 00:18:14.882 "state": "online", 00:18:14.882 "raid_level": "raid1", 00:18:14.882 "superblock": true, 00:18:14.882 "num_base_bdevs": 2, 00:18:14.882 "num_base_bdevs_discovered": 1, 00:18:14.882 "num_base_bdevs_operational": 1, 00:18:14.882 "base_bdevs_list": [ 00:18:14.882 { 00:18:14.882 "name": null, 00:18:14.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.882 "is_configured": false, 00:18:14.882 "data_offset": 0, 00:18:14.882 "data_size": 7936 00:18:14.882 }, 00:18:14.882 { 00:18:14.882 "name": "BaseBdev2", 00:18:14.882 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 
00:18:14.882 "is_configured": true, 00:18:14.882 "data_offset": 256, 00:18:14.882 "data_size": 7936 00:18:14.882 } 00:18:14.882 ] 00:18:14.882 }' 00:18:14.882 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.165 [2024-12-14 12:44:14.693643] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:15.165 [2024-12-14 12:44:14.693744] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.165 [2024-12-14 12:44:14.693774] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:15.165 [2024-12-14 12:44:14.693783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:15.165 [2024-12-14 12:44:14.694016] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.165 [2024-12-14 12:44:14.694028] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:15.165 [2024-12-14 12:44:14.694093] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:15.165 [2024-12-14 12:44:14.694107] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.165 [2024-12-14 12:44:14.694120] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:15.165 [2024-12-14 12:44:14.694129] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:15.165 BaseBdev1 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.165 12:44:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.103 12:44:15 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.103 "name": "raid_bdev1", 00:18:16.103 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:16.103 "strip_size_kb": 0, 00:18:16.103 "state": "online", 00:18:16.103 "raid_level": "raid1", 00:18:16.103 "superblock": true, 00:18:16.103 "num_base_bdevs": 2, 00:18:16.103 "num_base_bdevs_discovered": 1, 00:18:16.103 "num_base_bdevs_operational": 1, 00:18:16.103 "base_bdevs_list": [ 00:18:16.103 { 00:18:16.103 "name": null, 00:18:16.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.103 "is_configured": false, 00:18:16.103 "data_offset": 0, 00:18:16.103 "data_size": 7936 00:18:16.103 }, 00:18:16.103 { 00:18:16.103 "name": "BaseBdev2", 00:18:16.103 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:16.103 "is_configured": true, 00:18:16.103 "data_offset": 256, 00:18:16.103 "data_size": 7936 00:18:16.103 } 00:18:16.103 ] 00:18:16.103 }' 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.103 12:44:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.672 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.672 "name": "raid_bdev1", 00:18:16.672 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:16.672 "strip_size_kb": 0, 00:18:16.672 "state": "online", 00:18:16.672 "raid_level": "raid1", 00:18:16.672 "superblock": true, 00:18:16.672 "num_base_bdevs": 2, 00:18:16.672 "num_base_bdevs_discovered": 1, 00:18:16.672 "num_base_bdevs_operational": 1, 00:18:16.672 "base_bdevs_list": [ 00:18:16.672 { 00:18:16.672 "name": null, 00:18:16.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.673 
"is_configured": false, 00:18:16.673 "data_offset": 0, 00:18:16.673 "data_size": 7936 00:18:16.673 }, 00:18:16.673 { 00:18:16.673 "name": "BaseBdev2", 00:18:16.673 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:16.673 "is_configured": true, 00:18:16.673 "data_offset": 256, 00:18:16.673 "data_size": 7936 00:18:16.673 } 00:18:16.673 ] 00:18:16.673 }' 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.673 12:44:16 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.673 [2024-12-14 12:44:16.299008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.673 [2024-12-14 12:44:16.299240] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:16.673 [2024-12-14 12:44:16.299304] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:16.673 request: 00:18:16.673 { 00:18:16.673 "base_bdev": "BaseBdev1", 00:18:16.673 "raid_bdev": "raid_bdev1", 00:18:16.673 "method": "bdev_raid_add_base_bdev", 00:18:16.673 "req_id": 1 00:18:16.673 } 00:18:16.673 Got JSON-RPC error response 00:18:16.673 response: 00:18:16.673 { 00:18:16.673 "code": -22, 00:18:16.673 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:16.673 } 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.673 12:44:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.612 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.871 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.871 "name": "raid_bdev1", 00:18:17.871 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:17.871 "strip_size_kb": 0, 00:18:17.871 "state": "online", 00:18:17.871 "raid_level": "raid1", 00:18:17.871 "superblock": true, 00:18:17.871 "num_base_bdevs": 2, 00:18:17.871 
"num_base_bdevs_discovered": 1, 00:18:17.871 "num_base_bdevs_operational": 1, 00:18:17.871 "base_bdevs_list": [ 00:18:17.871 { 00:18:17.871 "name": null, 00:18:17.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.871 "is_configured": false, 00:18:17.871 "data_offset": 0, 00:18:17.871 "data_size": 7936 00:18:17.871 }, 00:18:17.871 { 00:18:17.871 "name": "BaseBdev2", 00:18:17.871 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:17.871 "is_configured": true, 00:18:17.871 "data_offset": 256, 00:18:17.871 "data_size": 7936 00:18:17.871 } 00:18:17.871 ] 00:18:17.871 }' 00:18:17.871 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.871 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.131 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.132 "name": "raid_bdev1", 00:18:18.132 "uuid": "e46b19cf-47e3-46c3-ab85-b4e785486127", 00:18:18.132 "strip_size_kb": 0, 00:18:18.132 "state": "online", 00:18:18.132 "raid_level": "raid1", 00:18:18.132 "superblock": true, 00:18:18.132 "num_base_bdevs": 2, 00:18:18.132 "num_base_bdevs_discovered": 1, 00:18:18.132 "num_base_bdevs_operational": 1, 00:18:18.132 "base_bdevs_list": [ 00:18:18.132 { 00:18:18.132 "name": null, 00:18:18.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.132 "is_configured": false, 00:18:18.132 "data_offset": 0, 00:18:18.132 "data_size": 7936 00:18:18.132 }, 00:18:18.132 { 00:18:18.132 "name": "BaseBdev2", 00:18:18.132 "uuid": "1fefed8a-e1ac-5a99-aca0-dd4ae47b78c1", 00:18:18.132 "is_configured": true, 00:18:18.132 "data_offset": 256, 00:18:18.132 "data_size": 7936 00:18:18.132 } 00:18:18.132 ] 00:18:18.132 }' 00:18:18.132 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.132 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.132 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.391 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 89507 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 89507 ']' 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 89507 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:18.392 12:44:17 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89507 00:18:18.392 killing process with pid 89507 00:18:18.392 Received shutdown signal, test time was about 60.000000 seconds 00:18:18.392 00:18:18.392 Latency(us) 00:18:18.392 [2024-12-14T12:44:18.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.392 [2024-12-14T12:44:18.130Z] =================================================================================================================== 00:18:18.392 [2024-12-14T12:44:18.130Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89507' 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 89507 00:18:18.392 [2024-12-14 12:44:17.914870] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.392 12:44:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 89507 00:18:18.392 [2024-12-14 12:44:17.914992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.392 [2024-12-14 12:44:17.915053] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.392 [2024-12-14 12:44:17.915065] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:18.651 [2024-12-14 12:44:18.230199] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:18:19.590 12:44:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:19.590 00:18:19.590 real 0m19.744s 00:18:19.590 user 0m25.902s 00:18:19.590 sys 0m2.447s 00:18:19.590 ************************************ 00:18:19.590 END TEST raid_rebuild_test_sb_md_separate 00:18:19.590 ************************************ 00:18:19.590 12:44:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.590 12:44:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:19.851 12:44:19 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:19.851 12:44:19 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:19.851 12:44:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:19.851 12:44:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.851 12:44:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.851 ************************************ 00:18:19.851 START TEST raid_state_function_test_sb_md_interleaved 00:18:19.851 ************************************ 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:19.851 12:44:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:19.851 Process raid pid: 90194 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=90194 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 90194' 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 90194 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90194 ']' 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.851 12:44:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:19.851 [2024-12-14 12:44:19.470905] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:19.851 [2024-12-14 12:44:19.471110] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.111 [2024-12-14 12:44:19.635264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.111 [2024-12-14 12:44:19.745118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.458 [2024-12-14 12:44:19.936502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.458 [2024-12-14 12:44:19.936616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.729 [2024-12-14 12:44:20.310058] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.729 [2024-12-14 12:44:20.310162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.729 [2024-12-14 12:44:20.310192] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.729 [2024-12-14 12:44:20.310215] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.729 12:44:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.729 12:44:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.729 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.729 "name": "Existed_Raid", 00:18:20.729 "uuid": "3c1d23e6-ffdd-403d-934b-915c97eb0154", 00:18:20.729 "strip_size_kb": 0, 00:18:20.729 "state": "configuring", 00:18:20.729 "raid_level": "raid1", 00:18:20.729 "superblock": true, 00:18:20.729 "num_base_bdevs": 2, 00:18:20.729 "num_base_bdevs_discovered": 0, 00:18:20.729 "num_base_bdevs_operational": 2, 00:18:20.729 "base_bdevs_list": [ 00:18:20.729 { 00:18:20.729 "name": "BaseBdev1", 00:18:20.729 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.730 "is_configured": false, 00:18:20.730 "data_offset": 0, 00:18:20.730 "data_size": 0 00:18:20.730 }, 00:18:20.730 { 00:18:20.730 "name": "BaseBdev2", 00:18:20.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.730 "is_configured": false, 00:18:20.730 "data_offset": 0, 00:18:20.730 "data_size": 0 00:18:20.730 } 00:18:20.730 ] 00:18:20.730 }' 00:18:20.730 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.730 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.299 [2024-12-14 12:44:20.737257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.299 [2024-12-14 12:44:20.737346] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.299 [2024-12-14 12:44:20.749229] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.299 [2024-12-14 12:44:20.749270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:21.299 [2024-12-14 12:44:20.749279] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.299 [2024-12-14 12:44:20.749289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.299 [2024-12-14 12:44:20.795229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.299 BaseBdev1 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.299 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.299 [ 00:18:21.299 { 00:18:21.299 "name": "BaseBdev1", 00:18:21.299 "aliases": [ 00:18:21.299 "9c181152-40ac-4b0c-92df-a4d47b7aff4e" 00:18:21.299 ], 00:18:21.299 "product_name": "Malloc disk", 00:18:21.299 "block_size": 4128, 00:18:21.299 "num_blocks": 8192, 00:18:21.299 "uuid": "9c181152-40ac-4b0c-92df-a4d47b7aff4e", 00:18:21.299 "md_size": 32, 00:18:21.299 
"md_interleave": true, 00:18:21.299 "dif_type": 0, 00:18:21.299 "assigned_rate_limits": { 00:18:21.299 "rw_ios_per_sec": 0, 00:18:21.299 "rw_mbytes_per_sec": 0, 00:18:21.299 "r_mbytes_per_sec": 0, 00:18:21.299 "w_mbytes_per_sec": 0 00:18:21.299 }, 00:18:21.299 "claimed": true, 00:18:21.299 "claim_type": "exclusive_write", 00:18:21.299 "zoned": false, 00:18:21.299 "supported_io_types": { 00:18:21.299 "read": true, 00:18:21.299 "write": true, 00:18:21.299 "unmap": true, 00:18:21.299 "flush": true, 00:18:21.299 "reset": true, 00:18:21.299 "nvme_admin": false, 00:18:21.299 "nvme_io": false, 00:18:21.299 "nvme_io_md": false, 00:18:21.299 "write_zeroes": true, 00:18:21.299 "zcopy": true, 00:18:21.299 "get_zone_info": false, 00:18:21.299 "zone_management": false, 00:18:21.299 "zone_append": false, 00:18:21.299 "compare": false, 00:18:21.299 "compare_and_write": false, 00:18:21.299 "abort": true, 00:18:21.299 "seek_hole": false, 00:18:21.300 "seek_data": false, 00:18:21.300 "copy": true, 00:18:21.300 "nvme_iov_md": false 00:18:21.300 }, 00:18:21.300 "memory_domains": [ 00:18:21.300 { 00:18:21.300 "dma_device_id": "system", 00:18:21.300 "dma_device_type": 1 00:18:21.300 }, 00:18:21.300 { 00:18:21.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.300 "dma_device_type": 2 00:18:21.300 } 00:18:21.300 ], 00:18:21.300 "driver_specific": {} 00:18:21.300 } 00:18:21.300 ] 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.300 12:44:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.300 "name": "Existed_Raid", 00:18:21.300 "uuid": "37ffa92e-de8e-46a0-9732-4d09dd3d7ca9", 00:18:21.300 "strip_size_kb": 0, 00:18:21.300 "state": "configuring", 00:18:21.300 "raid_level": "raid1", 
00:18:21.300 "superblock": true, 00:18:21.300 "num_base_bdevs": 2, 00:18:21.300 "num_base_bdevs_discovered": 1, 00:18:21.300 "num_base_bdevs_operational": 2, 00:18:21.300 "base_bdevs_list": [ 00:18:21.300 { 00:18:21.300 "name": "BaseBdev1", 00:18:21.300 "uuid": "9c181152-40ac-4b0c-92df-a4d47b7aff4e", 00:18:21.300 "is_configured": true, 00:18:21.300 "data_offset": 256, 00:18:21.300 "data_size": 7936 00:18:21.300 }, 00:18:21.300 { 00:18:21.300 "name": "BaseBdev2", 00:18:21.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.300 "is_configured": false, 00:18:21.300 "data_offset": 0, 00:18:21.300 "data_size": 0 00:18:21.300 } 00:18:21.300 ] 00:18:21.300 }' 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.300 12:44:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.870 [2024-12-14 12:44:21.302475] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.870 [2024-12-14 12:44:21.302530] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.870 [2024-12-14 12:44:21.310504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.870 [2024-12-14 12:44:21.312225] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.870 [2024-12-14 12:44:21.312261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.870 
12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.870 "name": "Existed_Raid", 00:18:21.870 "uuid": "26f7aa13-9fae-439e-abd5-c52c5ac5cb27", 00:18:21.870 "strip_size_kb": 0, 00:18:21.870 "state": "configuring", 00:18:21.870 "raid_level": "raid1", 00:18:21.870 "superblock": true, 00:18:21.870 "num_base_bdevs": 2, 00:18:21.870 "num_base_bdevs_discovered": 1, 00:18:21.870 "num_base_bdevs_operational": 2, 00:18:21.870 "base_bdevs_list": [ 00:18:21.870 { 00:18:21.870 "name": "BaseBdev1", 00:18:21.870 "uuid": "9c181152-40ac-4b0c-92df-a4d47b7aff4e", 00:18:21.870 "is_configured": true, 00:18:21.870 "data_offset": 256, 00:18:21.870 "data_size": 7936 00:18:21.870 }, 00:18:21.870 { 00:18:21.870 "name": "BaseBdev2", 00:18:21.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.870 "is_configured": false, 00:18:21.870 "data_offset": 0, 00:18:21.870 "data_size": 0 00:18:21.870 } 00:18:21.870 ] 00:18:21.870 }' 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:21.870 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.131 [2024-12-14 12:44:21.805168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.131 [2024-12-14 12:44:21.805476] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:22.131 [2024-12-14 12:44:21.805495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:22.131 [2024-12-14 12:44:21.805578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:22.131 [2024-12-14 12:44:21.805653] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:22.131 [2024-12-14 12:44:21.805664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:22.131 [2024-12-14 12:44:21.805724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.131 BaseBdev2 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.131 [ 00:18:22.131 { 00:18:22.131 "name": "BaseBdev2", 00:18:22.131 "aliases": [ 00:18:22.131 "6492171c-6da9-4f8f-a50d-0c446e586bef" 00:18:22.131 ], 00:18:22.131 "product_name": "Malloc disk", 00:18:22.131 "block_size": 4128, 00:18:22.131 "num_blocks": 8192, 00:18:22.131 "uuid": "6492171c-6da9-4f8f-a50d-0c446e586bef", 00:18:22.131 "md_size": 32, 00:18:22.131 "md_interleave": true, 00:18:22.131 "dif_type": 0, 00:18:22.131 "assigned_rate_limits": { 00:18:22.131 "rw_ios_per_sec": 0, 00:18:22.131 "rw_mbytes_per_sec": 0, 00:18:22.131 "r_mbytes_per_sec": 0, 00:18:22.131 "w_mbytes_per_sec": 0 00:18:22.131 }, 00:18:22.131 "claimed": true, 00:18:22.131 "claim_type": "exclusive_write", 
00:18:22.131 "zoned": false, 00:18:22.131 "supported_io_types": { 00:18:22.131 "read": true, 00:18:22.131 "write": true, 00:18:22.131 "unmap": true, 00:18:22.131 "flush": true, 00:18:22.131 "reset": true, 00:18:22.131 "nvme_admin": false, 00:18:22.131 "nvme_io": false, 00:18:22.131 "nvme_io_md": false, 00:18:22.131 "write_zeroes": true, 00:18:22.131 "zcopy": true, 00:18:22.131 "get_zone_info": false, 00:18:22.131 "zone_management": false, 00:18:22.131 "zone_append": false, 00:18:22.131 "compare": false, 00:18:22.131 "compare_and_write": false, 00:18:22.131 "abort": true, 00:18:22.131 "seek_hole": false, 00:18:22.131 "seek_data": false, 00:18:22.131 "copy": true, 00:18:22.131 "nvme_iov_md": false 00:18:22.131 }, 00:18:22.131 "memory_domains": [ 00:18:22.131 { 00:18:22.131 "dma_device_id": "system", 00:18:22.131 "dma_device_type": 1 00:18:22.131 }, 00:18:22.131 { 00:18:22.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.131 "dma_device_type": 2 00:18:22.131 } 00:18:22.131 ], 00:18:22.131 "driver_specific": {} 00:18:22.131 } 00:18:22.131 ] 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.131 
12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.131 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.391 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.391 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.391 "name": "Existed_Raid", 00:18:22.391 "uuid": "26f7aa13-9fae-439e-abd5-c52c5ac5cb27", 00:18:22.391 "strip_size_kb": 0, 00:18:22.391 "state": "online", 00:18:22.391 "raid_level": "raid1", 00:18:22.391 "superblock": true, 00:18:22.391 "num_base_bdevs": 2, 00:18:22.391 "num_base_bdevs_discovered": 2, 00:18:22.391 
"num_base_bdevs_operational": 2, 00:18:22.391 "base_bdevs_list": [ 00:18:22.391 { 00:18:22.391 "name": "BaseBdev1", 00:18:22.391 "uuid": "9c181152-40ac-4b0c-92df-a4d47b7aff4e", 00:18:22.391 "is_configured": true, 00:18:22.391 "data_offset": 256, 00:18:22.391 "data_size": 7936 00:18:22.391 }, 00:18:22.391 { 00:18:22.391 "name": "BaseBdev2", 00:18:22.391 "uuid": "6492171c-6da9-4f8f-a50d-0c446e586bef", 00:18:22.391 "is_configured": true, 00:18:22.391 "data_offset": 256, 00:18:22.391 "data_size": 7936 00:18:22.391 } 00:18:22.391 ] 00:18:22.391 }' 00:18:22.391 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.391 12:44:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.651 12:44:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:22.651 [2024-12-14 12:44:22.308687] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.651 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.651 "name": "Existed_Raid", 00:18:22.651 "aliases": [ 00:18:22.651 "26f7aa13-9fae-439e-abd5-c52c5ac5cb27" 00:18:22.651 ], 00:18:22.651 "product_name": "Raid Volume", 00:18:22.651 "block_size": 4128, 00:18:22.651 "num_blocks": 7936, 00:18:22.651 "uuid": "26f7aa13-9fae-439e-abd5-c52c5ac5cb27", 00:18:22.651 "md_size": 32, 00:18:22.651 "md_interleave": true, 00:18:22.651 "dif_type": 0, 00:18:22.651 "assigned_rate_limits": { 00:18:22.651 "rw_ios_per_sec": 0, 00:18:22.651 "rw_mbytes_per_sec": 0, 00:18:22.651 "r_mbytes_per_sec": 0, 00:18:22.651 "w_mbytes_per_sec": 0 00:18:22.651 }, 00:18:22.651 "claimed": false, 00:18:22.651 "zoned": false, 00:18:22.651 "supported_io_types": { 00:18:22.651 "read": true, 00:18:22.651 "write": true, 00:18:22.651 "unmap": false, 00:18:22.651 "flush": false, 00:18:22.651 "reset": true, 00:18:22.651 "nvme_admin": false, 00:18:22.651 "nvme_io": false, 00:18:22.651 "nvme_io_md": false, 00:18:22.651 "write_zeroes": true, 00:18:22.651 "zcopy": false, 00:18:22.651 "get_zone_info": false, 00:18:22.651 "zone_management": false, 00:18:22.651 "zone_append": false, 00:18:22.651 "compare": false, 00:18:22.651 "compare_and_write": false, 00:18:22.651 "abort": false, 00:18:22.651 "seek_hole": false, 00:18:22.651 "seek_data": false, 00:18:22.651 "copy": false, 00:18:22.651 "nvme_iov_md": false 00:18:22.651 }, 00:18:22.651 "memory_domains": [ 00:18:22.651 { 00:18:22.651 "dma_device_id": "system", 00:18:22.651 "dma_device_type": 1 00:18:22.651 }, 00:18:22.651 { 00:18:22.651 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:22.651 "dma_device_type": 2 00:18:22.651 }, 00:18:22.651 { 00:18:22.651 "dma_device_id": "system", 00:18:22.651 "dma_device_type": 1 00:18:22.651 }, 00:18:22.651 { 00:18:22.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.651 "dma_device_type": 2 00:18:22.651 } 00:18:22.651 ], 00:18:22.651 "driver_specific": { 00:18:22.651 "raid": { 00:18:22.651 "uuid": "26f7aa13-9fae-439e-abd5-c52c5ac5cb27", 00:18:22.651 "strip_size_kb": 0, 00:18:22.651 "state": "online", 00:18:22.651 "raid_level": "raid1", 00:18:22.651 "superblock": true, 00:18:22.651 "num_base_bdevs": 2, 00:18:22.651 "num_base_bdevs_discovered": 2, 00:18:22.651 "num_base_bdevs_operational": 2, 00:18:22.651 "base_bdevs_list": [ 00:18:22.651 { 00:18:22.651 "name": "BaseBdev1", 00:18:22.651 "uuid": "9c181152-40ac-4b0c-92df-a4d47b7aff4e", 00:18:22.651 "is_configured": true, 00:18:22.651 "data_offset": 256, 00:18:22.651 "data_size": 7936 00:18:22.651 }, 00:18:22.651 { 00:18:22.651 "name": "BaseBdev2", 00:18:22.651 "uuid": "6492171c-6da9-4f8f-a50d-0c446e586bef", 00:18:22.652 "is_configured": true, 00:18:22.652 "data_offset": 256, 00:18:22.652 "data_size": 7936 00:18:22.652 } 00:18:22.652 ] 00:18:22.652 } 00:18:22.652 } 00:18:22.652 }' 00:18:22.652 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:22.652 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:22.652 BaseBdev2' 00:18:22.652 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:22.912 
12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.912 [2024-12-14 12:44:22.532094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.912 12:44:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.912 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.172 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.172 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.172 "name": "Existed_Raid", 00:18:23.172 "uuid": "26f7aa13-9fae-439e-abd5-c52c5ac5cb27", 00:18:23.172 "strip_size_kb": 0, 00:18:23.172 "state": "online", 00:18:23.172 "raid_level": "raid1", 00:18:23.172 "superblock": true, 00:18:23.172 "num_base_bdevs": 2, 00:18:23.172 "num_base_bdevs_discovered": 1, 00:18:23.172 "num_base_bdevs_operational": 1, 00:18:23.172 "base_bdevs_list": [ 00:18:23.172 { 00:18:23.172 "name": null, 00:18:23.172 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:23.172 "is_configured": false, 00:18:23.172 "data_offset": 0, 00:18:23.172 "data_size": 7936 00:18:23.172 }, 00:18:23.172 { 00:18:23.172 "name": "BaseBdev2", 00:18:23.172 "uuid": "6492171c-6da9-4f8f-a50d-0c446e586bef", 00:18:23.172 "is_configured": true, 00:18:23.172 "data_offset": 256, 00:18:23.172 "data_size": 7936 00:18:23.172 } 00:18:23.172 ] 00:18:23.172 }' 00:18:23.172 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.172 12:44:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:23.432 12:44:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.432 [2024-12-14 12:44:23.070366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:23.432 [2024-12-14 12:44:23.070518] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.432 [2024-12-14 12:44:23.161890] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.432 [2024-12-14 12:44:23.162019] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.432 [2024-12-14 12:44:23.162036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:23.432 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 90194 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90194 ']' 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90194 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90194 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90194' 00:18:23.692 killing process with pid 90194 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 90194 00:18:23.692 [2024-12-14 12:44:23.245530] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.692 12:44:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 90194 00:18:23.692 [2024-12-14 12:44:23.261482] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:24.629 
12:44:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:24.629 00:18:24.629 real 0m4.975s 00:18:24.629 user 0m7.186s 00:18:24.629 sys 0m0.816s 00:18:24.630 12:44:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.630 12:44:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.630 ************************************ 00:18:24.630 END TEST raid_state_function_test_sb_md_interleaved 00:18:24.630 ************************************ 00:18:24.890 12:44:24 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:24.890 12:44:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:24.890 12:44:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.890 12:44:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:24.890 ************************************ 00:18:24.890 START TEST raid_superblock_test_md_interleaved 00:18:24.890 ************************************ 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:24.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=90444 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 90444 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90444 ']' 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.890 12:44:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.890 [2024-12-14 12:44:24.507504] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:24.890 [2024-12-14 12:44:24.507708] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90444 ] 00:18:25.150 [2024-12-14 12:44:24.678408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.150 [2024-12-14 12:44:24.791437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.410 [2024-12-14 12:44:24.984006] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.410 [2024-12-14 12:44:24.984068] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.669 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.669 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:25.669 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.670 malloc1 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.670 [2024-12-14 12:44:25.385639] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:25.670 [2024-12-14 12:44:25.385697] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.670 [2024-12-14 12:44:25.385745] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:25.670 [2024-12-14 12:44:25.385754] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.670 [2024-12-14 12:44:25.387557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.670 [2024-12-14 12:44:25.387595] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:25.670 pt1 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:25.670 12:44:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.670 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.929 malloc2 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.929 [2024-12-14 12:44:25.442741] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:25.929 [2024-12-14 12:44:25.442850] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.929 [2024-12-14 12:44:25.442907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:25.929 [2024-12-14 12:44:25.442946] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.929 [2024-12-14 12:44:25.444805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.929 [2024-12-14 12:44:25.444873] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:25.929 pt2 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.929 [2024-12-14 12:44:25.454764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:25.929 [2024-12-14 12:44:25.456546] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:25.929 [2024-12-14 12:44:25.456777] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:25.929 [2024-12-14 12:44:25.456820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:25.929 [2024-12-14 12:44:25.456906] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:25.929 [2024-12-14 12:44:25.457008] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:25.929 [2024-12-14 12:44:25.457061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:25.929 [2024-12-14 12:44:25.457182] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:25.929 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.930 "name": "raid_bdev1", 00:18:25.930 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:25.930 "strip_size_kb": 0, 00:18:25.930 "state": "online", 00:18:25.930 "raid_level": "raid1", 00:18:25.930 "superblock": true, 00:18:25.930 "num_base_bdevs": 2, 00:18:25.930 "num_base_bdevs_discovered": 2, 00:18:25.930 "num_base_bdevs_operational": 2, 00:18:25.930 "base_bdevs_list": [ 00:18:25.930 { 00:18:25.930 "name": "pt1", 00:18:25.930 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:25.930 "is_configured": true, 00:18:25.930 "data_offset": 256, 00:18:25.930 "data_size": 7936 00:18:25.930 }, 00:18:25.930 { 00:18:25.930 "name": "pt2", 00:18:25.930 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:25.930 "is_configured": true, 00:18:25.930 "data_offset": 256, 00:18:25.930 "data_size": 7936 00:18:25.930 } 00:18:25.930 ] 00:18:25.930 }' 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.930 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:26.189 12:44:25 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:26.189 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.189 [2024-12-14 12:44:25.902302] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.448 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.448 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.448 "name": "raid_bdev1", 00:18:26.448 "aliases": [ 00:18:26.448 "6612a944-c48f-4e09-8c7d-fb8f9d42179d" 00:18:26.448 ], 00:18:26.448 "product_name": "Raid Volume", 00:18:26.448 "block_size": 4128, 00:18:26.448 "num_blocks": 7936, 00:18:26.448 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:26.448 "md_size": 32, 00:18:26.448 "md_interleave": true, 00:18:26.448 "dif_type": 0, 00:18:26.448 "assigned_rate_limits": { 00:18:26.448 "rw_ios_per_sec": 0, 00:18:26.448 "rw_mbytes_per_sec": 0, 00:18:26.448 "r_mbytes_per_sec": 0, 00:18:26.448 "w_mbytes_per_sec": 0 00:18:26.448 }, 00:18:26.448 "claimed": false, 00:18:26.448 "zoned": false, 00:18:26.448 "supported_io_types": { 00:18:26.448 "read": true, 00:18:26.448 "write": true, 00:18:26.448 "unmap": false, 00:18:26.448 "flush": false, 00:18:26.448 "reset": true, 
00:18:26.448 "nvme_admin": false, 00:18:26.448 "nvme_io": false, 00:18:26.448 "nvme_io_md": false, 00:18:26.448 "write_zeroes": true, 00:18:26.448 "zcopy": false, 00:18:26.448 "get_zone_info": false, 00:18:26.448 "zone_management": false, 00:18:26.448 "zone_append": false, 00:18:26.448 "compare": false, 00:18:26.448 "compare_and_write": false, 00:18:26.448 "abort": false, 00:18:26.448 "seek_hole": false, 00:18:26.448 "seek_data": false, 00:18:26.448 "copy": false, 00:18:26.448 "nvme_iov_md": false 00:18:26.448 }, 00:18:26.448 "memory_domains": [ 00:18:26.448 { 00:18:26.449 "dma_device_id": "system", 00:18:26.449 "dma_device_type": 1 00:18:26.449 }, 00:18:26.449 { 00:18:26.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.449 "dma_device_type": 2 00:18:26.449 }, 00:18:26.449 { 00:18:26.449 "dma_device_id": "system", 00:18:26.449 "dma_device_type": 1 00:18:26.449 }, 00:18:26.449 { 00:18:26.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.449 "dma_device_type": 2 00:18:26.449 } 00:18:26.449 ], 00:18:26.449 "driver_specific": { 00:18:26.449 "raid": { 00:18:26.449 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:26.449 "strip_size_kb": 0, 00:18:26.449 "state": "online", 00:18:26.449 "raid_level": "raid1", 00:18:26.449 "superblock": true, 00:18:26.449 "num_base_bdevs": 2, 00:18:26.449 "num_base_bdevs_discovered": 2, 00:18:26.449 "num_base_bdevs_operational": 2, 00:18:26.449 "base_bdevs_list": [ 00:18:26.449 { 00:18:26.449 "name": "pt1", 00:18:26.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.449 "is_configured": true, 00:18:26.449 "data_offset": 256, 00:18:26.449 "data_size": 7936 00:18:26.449 }, 00:18:26.449 { 00:18:26.449 "name": "pt2", 00:18:26.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.449 "is_configured": true, 00:18:26.449 "data_offset": 256, 00:18:26.449 "data_size": 7936 00:18:26.449 } 00:18:26.449 ] 00:18:26.449 } 00:18:26.449 } 00:18:26.449 }' 00:18:26.449 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.449 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:26.449 pt2' 00:18:26.449 12:44:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:26.449 [2024-12-14 12:44:26.157778] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.449 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.709 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6612a944-c48f-4e09-8c7d-fb8f9d42179d 00:18:26.709 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 6612a944-c48f-4e09-8c7d-fb8f9d42179d ']' 00:18:26.709 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:26.709 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.709 12:44:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.709 [2024-12-14 12:44:26.205423] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.709 [2024-12-14 12:44:26.205449] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.709 [2024-12-14 12:44:26.205541] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.710 [2024-12-14 12:44:26.205600] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.710 [2024-12-14 12:44:26.205611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:26.710 12:44:26 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:26.710 
12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.710 [2024-12-14 12:44:26.333232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:26.710 [2024-12-14 12:44:26.335026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:26.710 [2024-12-14 12:44:26.335108] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:26.710 [2024-12-14 12:44:26.335163] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:26.710 [2024-12-14 12:44:26.335193] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.710 [2024-12-14 12:44:26.335203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:26.710 request: 
00:18:26.710 { 00:18:26.710 "name": "raid_bdev1", 00:18:26.710 "raid_level": "raid1", 00:18:26.710 "base_bdevs": [ 00:18:26.710 "malloc1", 00:18:26.710 "malloc2" 00:18:26.710 ], 00:18:26.710 "superblock": false, 00:18:26.710 "method": "bdev_raid_create", 00:18:26.710 "req_id": 1 00:18:26.710 } 00:18:26.710 Got JSON-RPC error response 00:18:26.710 response: 00:18:26.710 { 00:18:26.710 "code": -17, 00:18:26.710 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:26.710 } 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # 
rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.710 [2024-12-14 12:44:26.389113] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:26.710 [2024-12-14 12:44:26.389201] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.710 [2024-12-14 12:44:26.389250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:26.710 [2024-12-14 12:44:26.389304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.710 [2024-12-14 12:44:26.391249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.710 [2024-12-14 12:44:26.391332] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:26.710 [2024-12-14 12:44:26.391405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:26.710 [2024-12-14 12:44:26.391478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:26.710 pt1 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.710 12:44:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.710 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.711 "name": "raid_bdev1", 00:18:26.711 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:26.711 "strip_size_kb": 0, 00:18:26.711 "state": "configuring", 00:18:26.711 "raid_level": "raid1", 00:18:26.711 "superblock": true, 00:18:26.711 "num_base_bdevs": 2, 00:18:26.711 "num_base_bdevs_discovered": 1, 00:18:26.711 "num_base_bdevs_operational": 2, 00:18:26.711 "base_bdevs_list": [ 00:18:26.711 { 00:18:26.711 "name": "pt1", 00:18:26.711 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.711 "is_configured": true, 00:18:26.711 
"data_offset": 256, 00:18:26.711 "data_size": 7936 00:18:26.711 }, 00:18:26.711 { 00:18:26.711 "name": null, 00:18:26.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.711 "is_configured": false, 00:18:26.711 "data_offset": 256, 00:18:26.711 "data_size": 7936 00:18:26.711 } 00:18:26.711 ] 00:18:26.711 }' 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.711 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.280 [2024-12-14 12:44:26.764496] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:27.280 [2024-12-14 12:44:26.764614] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.280 [2024-12-14 12:44:26.764654] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:27.280 [2024-12-14 12:44:26.764684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.280 [2024-12-14 12:44:26.764893] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.280 [2024-12-14 12:44:26.764940] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:18:27.280 [2024-12-14 12:44:26.765020] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:27.280 [2024-12-14 12:44:26.765085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.280 [2024-12-14 12:44:26.765203] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:27.280 [2024-12-14 12:44:26.765242] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.280 [2024-12-14 12:44:26.765332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:27.280 [2024-12-14 12:44:26.765429] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:27.280 [2024-12-14 12:44:26.765465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:27.280 [2024-12-14 12:44:26.765564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.280 pt2 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.280 12:44:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.280 "name": "raid_bdev1", 00:18:27.280 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:27.280 "strip_size_kb": 0, 00:18:27.280 "state": "online", 00:18:27.280 "raid_level": "raid1", 00:18:27.280 "superblock": true, 00:18:27.280 "num_base_bdevs": 2, 00:18:27.280 "num_base_bdevs_discovered": 2, 00:18:27.280 "num_base_bdevs_operational": 2, 00:18:27.280 "base_bdevs_list": [ 00:18:27.280 { 00:18:27.280 "name": "pt1", 00:18:27.280 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.280 "is_configured": true, 00:18:27.280 
"data_offset": 256, 00:18:27.280 "data_size": 7936 00:18:27.280 }, 00:18:27.280 { 00:18:27.280 "name": "pt2", 00:18:27.280 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.280 "is_configured": true, 00:18:27.280 "data_offset": 256, 00:18:27.280 "data_size": 7936 00:18:27.280 } 00:18:27.280 ] 00:18:27.280 }' 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.280 12:44:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.540 [2024-12-14 12:44:27.212000] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:18:27.540 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.540 "name": "raid_bdev1", 00:18:27.540 "aliases": [ 00:18:27.540 "6612a944-c48f-4e09-8c7d-fb8f9d42179d" 00:18:27.540 ], 00:18:27.540 "product_name": "Raid Volume", 00:18:27.540 "block_size": 4128, 00:18:27.540 "num_blocks": 7936, 00:18:27.540 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:27.540 "md_size": 32, 00:18:27.540 "md_interleave": true, 00:18:27.540 "dif_type": 0, 00:18:27.540 "assigned_rate_limits": { 00:18:27.540 "rw_ios_per_sec": 0, 00:18:27.540 "rw_mbytes_per_sec": 0, 00:18:27.540 "r_mbytes_per_sec": 0, 00:18:27.540 "w_mbytes_per_sec": 0 00:18:27.540 }, 00:18:27.540 "claimed": false, 00:18:27.540 "zoned": false, 00:18:27.540 "supported_io_types": { 00:18:27.540 "read": true, 00:18:27.540 "write": true, 00:18:27.540 "unmap": false, 00:18:27.540 "flush": false, 00:18:27.540 "reset": true, 00:18:27.540 "nvme_admin": false, 00:18:27.540 "nvme_io": false, 00:18:27.540 "nvme_io_md": false, 00:18:27.540 "write_zeroes": true, 00:18:27.540 "zcopy": false, 00:18:27.540 "get_zone_info": false, 00:18:27.540 "zone_management": false, 00:18:27.540 "zone_append": false, 00:18:27.540 "compare": false, 00:18:27.540 "compare_and_write": false, 00:18:27.540 "abort": false, 00:18:27.540 "seek_hole": false, 00:18:27.540 "seek_data": false, 00:18:27.540 "copy": false, 00:18:27.540 "nvme_iov_md": false 00:18:27.540 }, 00:18:27.540 "memory_domains": [ 00:18:27.540 { 00:18:27.540 "dma_device_id": "system", 00:18:27.540 "dma_device_type": 1 00:18:27.540 }, 00:18:27.540 { 00:18:27.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.540 "dma_device_type": 2 00:18:27.540 }, 00:18:27.540 { 00:18:27.540 "dma_device_id": "system", 00:18:27.540 "dma_device_type": 1 00:18:27.540 }, 00:18:27.540 { 00:18:27.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.540 "dma_device_type": 2 00:18:27.540 } 00:18:27.540 ], 00:18:27.540 "driver_specific": { 
00:18:27.540 "raid": { 00:18:27.540 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:27.540 "strip_size_kb": 0, 00:18:27.540 "state": "online", 00:18:27.540 "raid_level": "raid1", 00:18:27.540 "superblock": true, 00:18:27.540 "num_base_bdevs": 2, 00:18:27.540 "num_base_bdevs_discovered": 2, 00:18:27.540 "num_base_bdevs_operational": 2, 00:18:27.540 "base_bdevs_list": [ 00:18:27.540 { 00:18:27.540 "name": "pt1", 00:18:27.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.540 "is_configured": true, 00:18:27.540 "data_offset": 256, 00:18:27.540 "data_size": 7936 00:18:27.540 }, 00:18:27.540 { 00:18:27.540 "name": "pt2", 00:18:27.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.540 "is_configured": true, 00:18:27.540 "data_offset": 256, 00:18:27.540 "data_size": 7936 00:18:27.540 } 00:18:27.540 ] 00:18:27.541 } 00:18:27.541 } 00:18:27.541 }' 00:18:27.541 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:27.801 pt2' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.801 [2024-12-14 12:44:27.439628] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 6612a944-c48f-4e09-8c7d-fb8f9d42179d '!=' 6612a944-c48f-4e09-8c7d-fb8f9d42179d ']' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.801 [2024-12-14 12:44:27.491326] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.801 
12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.801 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.061 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.061 "name": "raid_bdev1", 00:18:28.061 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:28.061 "strip_size_kb": 0, 00:18:28.061 "state": "online", 00:18:28.061 "raid_level": "raid1", 00:18:28.061 "superblock": true, 00:18:28.061 "num_base_bdevs": 2, 00:18:28.061 "num_base_bdevs_discovered": 1, 00:18:28.061 "num_base_bdevs_operational": 1, 00:18:28.061 "base_bdevs_list": [ 00:18:28.061 { 00:18:28.061 "name": null, 00:18:28.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.061 "is_configured": false, 00:18:28.061 
"data_offset": 0, 00:18:28.061 "data_size": 7936 00:18:28.061 }, 00:18:28.061 { 00:18:28.061 "name": "pt2", 00:18:28.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.061 "is_configured": true, 00:18:28.061 "data_offset": 256, 00:18:28.061 "data_size": 7936 00:18:28.061 } 00:18:28.061 ] 00:18:28.061 }' 00:18:28.061 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.061 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.320 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:28.320 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.320 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.320 [2024-12-14 12:44:27.966443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.320 [2024-12-14 12:44:27.966472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.321 [2024-12-14 12:44:27.966552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.321 [2024-12-14 12:44:27.966611] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.321 [2024-12-14 12:44:27.966623] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:28.321 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.321 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.321 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.321 12:44:27 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.321 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:28.321 12:44:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.321 [2024-12-14 12:44:28.038307] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.321 [2024-12-14 12:44:28.038404] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.321 [2024-12-14 12:44:28.038440] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:28.321 [2024-12-14 12:44:28.038459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.321 [2024-12-14 12:44:28.040351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.321 [2024-12-14 12:44:28.040390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.321 [2024-12-14 12:44:28.040443] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:28.321 [2024-12-14 12:44:28.040491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.321 [2024-12-14 12:44:28.040568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:28.321 [2024-12-14 12:44:28.040579] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:28.321 [2024-12-14 12:44:28.040666] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:28.321 [2024-12-14 12:44:28.040727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:28.321 [2024-12-14 12:44:28.040734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:28.321 [2024-12-14 12:44:28.040813] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:18:28.321 pt2 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.321 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.581 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.581 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.581 "name": "raid_bdev1", 00:18:28.581 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:28.581 "strip_size_kb": 0, 00:18:28.581 "state": "online", 00:18:28.581 "raid_level": "raid1", 00:18:28.581 "superblock": true, 00:18:28.581 "num_base_bdevs": 2, 00:18:28.581 "num_base_bdevs_discovered": 1, 00:18:28.581 "num_base_bdevs_operational": 1, 00:18:28.581 "base_bdevs_list": [ 00:18:28.581 { 00:18:28.581 "name": null, 00:18:28.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.581 "is_configured": false, 00:18:28.581 "data_offset": 256, 00:18:28.581 "data_size": 7936 00:18:28.581 }, 00:18:28.581 { 00:18:28.581 "name": "pt2", 00:18:28.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.581 "is_configured": true, 00:18:28.581 "data_offset": 256, 00:18:28.581 "data_size": 7936 00:18:28.581 } 00:18:28.581 ] 00:18:28.581 }' 00:18:28.581 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.581 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.841 [2024-12-14 12:44:28.513556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.841 [2024-12-14 12:44:28.513725] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.841 [2024-12-14 12:44:28.513875] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.841 
[2024-12-14 12:44:28.513986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.841 [2024-12-14 12:44:28.514052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.841 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.841 [2024-12-14 12:44:28.573421] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:28.841 [2024-12-14 12:44:28.573548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:18:28.841 [2024-12-14 12:44:28.573603] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:28.841 [2024-12-14 12:44:28.573641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.841 [2024-12-14 12:44:28.575998] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.841 [2024-12-14 12:44:28.576120] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:28.841 [2024-12-14 12:44:28.576237] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:28.841 [2024-12-14 12:44:28.576333] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:28.841 [2024-12-14 12:44:28.576488] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:28.841 [2024-12-14 12:44:28.576543] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.841 [2024-12-14 12:44:28.576628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:28.841 [2024-12-14 12:44:28.576728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.841 [2024-12-14 12:44:28.576857] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:28.841 [2024-12-14 12:44:28.576899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:28.841 [2024-12-14 12:44:28.577013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:28.841 [2024-12-14 12:44:28.577141] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:28.841 [2024-12-14 12:44:28.577181] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:28.841 [2024-12-14 
12:44:28.577352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.101 pt1 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.101 
12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.101 "name": "raid_bdev1", 00:18:29.101 "uuid": "6612a944-c48f-4e09-8c7d-fb8f9d42179d", 00:18:29.101 "strip_size_kb": 0, 00:18:29.101 "state": "online", 00:18:29.101 "raid_level": "raid1", 00:18:29.101 "superblock": true, 00:18:29.101 "num_base_bdevs": 2, 00:18:29.101 "num_base_bdevs_discovered": 1, 00:18:29.101 "num_base_bdevs_operational": 1, 00:18:29.101 "base_bdevs_list": [ 00:18:29.101 { 00:18:29.101 "name": null, 00:18:29.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.101 "is_configured": false, 00:18:29.101 "data_offset": 256, 00:18:29.101 "data_size": 7936 00:18:29.101 }, 00:18:29.101 { 00:18:29.101 "name": "pt2", 00:18:29.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.101 "is_configured": true, 00:18:29.101 "data_offset": 256, 00:18:29.101 "data_size": 7936 00:18:29.101 } 00:18:29.101 ] 00:18:29.101 }' 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.101 12:44:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.361 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:29.362 [2024-12-14 12:44:29.080799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.362 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.621 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 6612a944-c48f-4e09-8c7d-fb8f9d42179d '!=' 6612a944-c48f-4e09-8c7d-fb8f9d42179d ']' 00:18:29.621 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 90444 00:18:29.621 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90444 ']' 00:18:29.621 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90444 00:18:29.621 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:29.621 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.622 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90444 00:18:29.622 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.622 12:44:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.622 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90444' 00:18:29.622 killing process with pid 90444 00:18:29.622 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 90444 00:18:29.622 [2024-12-14 12:44:29.156648] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:29.622 [2024-12-14 12:44:29.156738] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.622 [2024-12-14 12:44:29.156789] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.622 [2024-12-14 12:44:29.156806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:29.622 12:44:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 90444 00:18:29.881 [2024-12-14 12:44:29.373329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:30.821 12:44:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:30.821 00:18:30.821 real 0m6.130s 00:18:30.821 user 0m9.223s 00:18:30.821 sys 0m1.071s 00:18:30.821 12:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:30.821 ************************************ 00:18:30.821 END TEST raid_superblock_test_md_interleaved 00:18:30.821 ************************************ 00:18:30.821 12:44:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.107 12:44:30 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:31.107 12:44:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:18:31.107 12:44:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.107 12:44:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.107 ************************************ 00:18:31.107 START TEST raid_rebuild_test_sb_md_interleaved 00:18:31.107 ************************************ 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:31.107 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=90774 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 90774 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 90774 ']' 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.108 12:44:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.108 [2024-12-14 12:44:30.719854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:31.108 [2024-12-14 12:44:30.720061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:31.108 Zero copy mechanism will not be used. 
00:18:31.108 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90774 ] 00:18:31.368 [2024-12-14 12:44:30.889722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.368 [2024-12-14 12:44:31.027967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.630 [2024-12-14 12:44:31.250109] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.630 [2024-12-14 12:44:31.250196] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.889 BaseBdev1_malloc 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.889 [2024-12-14 12:44:31.602101] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:31.889 [2024-12-14 12:44:31.602184] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:31.889 [2024-12-14 12:44:31.602211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:31.889 [2024-12-14 12:44:31.602225] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:31.889 [2024-12-14 12:44:31.604265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:31.889 [2024-12-14 12:44:31.604399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:31.889 BaseBdev1 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.889 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.149 BaseBdev2_malloc 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.149 [2024-12-14 12:44:31.659217] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:18:32.149 [2024-12-14 12:44:31.659375] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.149 [2024-12-14 12:44:31.659404] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:32.149 [2024-12-14 12:44:31.659420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.149 [2024-12-14 12:44:31.661498] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.149 [2024-12-14 12:44:31.661542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:32.149 BaseBdev2 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.149 spare_malloc 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.149 spare_delay 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.149 [2024-12-14 12:44:31.741147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:32.149 [2024-12-14 12:44:31.741213] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.149 [2024-12-14 12:44:31.741235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:32.149 [2024-12-14 12:44:31.741249] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.149 [2024-12-14 12:44:31.743337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.149 [2024-12-14 12:44:31.743384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:32.149 spare 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.149 [2024-12-14 12:44:31.753160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.149 [2024-12-14 12:44:31.755254] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.149 [2024-12-14 12:44:31.755460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:32.149 [2024-12-14 12:44:31.755477] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:32.149 [2024-12-14 12:44:31.755549] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:32.149 [2024-12-14 12:44:31.755626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:32.149 [2024-12-14 12:44:31.755635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:32.149 [2024-12-14 12:44:31.755706] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.149 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.150 12:44:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.150 "name": "raid_bdev1", 00:18:32.150 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:32.150 "strip_size_kb": 0, 00:18:32.150 "state": "online", 00:18:32.150 "raid_level": "raid1", 00:18:32.150 "superblock": true, 00:18:32.150 "num_base_bdevs": 2, 00:18:32.150 "num_base_bdevs_discovered": 2, 00:18:32.150 "num_base_bdevs_operational": 2, 00:18:32.150 "base_bdevs_list": [ 00:18:32.150 { 00:18:32.150 "name": "BaseBdev1", 00:18:32.150 "uuid": "635a598f-5b02-5a32-89fd-cee8f55fb562", 00:18:32.150 "is_configured": true, 00:18:32.150 "data_offset": 256, 00:18:32.150 "data_size": 7936 00:18:32.150 }, 00:18:32.150 { 00:18:32.150 "name": "BaseBdev2", 00:18:32.150 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:32.150 "is_configured": true, 00:18:32.150 "data_offset": 256, 00:18:32.150 "data_size": 7936 00:18:32.150 } 00:18:32.150 ] 00:18:32.150 }' 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.150 12:44:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.720 12:44:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:32.720 [2024-12-14 12:44:32.196740] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.720 [2024-12-14 12:44:32.272237] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.720 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.721 "name": "raid_bdev1", 00:18:32.721 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:32.721 "strip_size_kb": 0, 00:18:32.721 "state": "online", 00:18:32.721 "raid_level": "raid1", 00:18:32.721 "superblock": true, 00:18:32.721 "num_base_bdevs": 2, 00:18:32.721 "num_base_bdevs_discovered": 1, 00:18:32.721 "num_base_bdevs_operational": 1, 00:18:32.721 "base_bdevs_list": [ 00:18:32.721 { 00:18:32.721 "name": null, 00:18:32.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.721 "is_configured": false, 00:18:32.721 "data_offset": 0, 00:18:32.721 "data_size": 7936 00:18:32.721 }, 00:18:32.721 { 00:18:32.721 "name": "BaseBdev2", 00:18:32.721 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:32.721 "is_configured": true, 00:18:32.721 "data_offset": 256, 00:18:32.721 "data_size": 7936 00:18:32.721 } 00:18:32.721 ] 00:18:32.721 }' 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.721 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.290 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:33.290 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.290 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.290 [2024-12-14 12:44:32.735535] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.290 [2024-12-14 12:44:32.753559] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
00:18:33.290 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.290 12:44:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:33.290 [2024-12-14 12:44:32.755677] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.230 "name": "raid_bdev1", 00:18:34.230 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:34.230 "strip_size_kb": 0, 00:18:34.230 "state": "online", 00:18:34.230 "raid_level": "raid1", 00:18:34.230 "superblock": true, 00:18:34.230 
"num_base_bdevs": 2, 00:18:34.230 "num_base_bdevs_discovered": 2, 00:18:34.230 "num_base_bdevs_operational": 2, 00:18:34.230 "process": { 00:18:34.230 "type": "rebuild", 00:18:34.230 "target": "spare", 00:18:34.230 "progress": { 00:18:34.230 "blocks": 2560, 00:18:34.230 "percent": 32 00:18:34.230 } 00:18:34.230 }, 00:18:34.230 "base_bdevs_list": [ 00:18:34.230 { 00:18:34.230 "name": "spare", 00:18:34.230 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:34.230 "is_configured": true, 00:18:34.230 "data_offset": 256, 00:18:34.230 "data_size": 7936 00:18:34.230 }, 00:18:34.230 { 00:18:34.230 "name": "BaseBdev2", 00:18:34.230 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:34.230 "is_configured": true, 00:18:34.230 "data_offset": 256, 00:18:34.230 "data_size": 7936 00:18:34.230 } 00:18:34.230 ] 00:18:34.230 }' 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.230 12:44:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.230 [2024-12-14 12:44:33.922854] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.490 [2024-12-14 12:44:33.966343] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:34.490 
[2024-12-14 12:44:33.966453] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.490 [2024-12-14 12:44:33.966472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.490 [2024-12-14 12:44:33.966490] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.490 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.490 "name": "raid_bdev1", 00:18:34.490 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:34.490 "strip_size_kb": 0, 00:18:34.490 "state": "online", 00:18:34.490 "raid_level": "raid1", 00:18:34.490 "superblock": true, 00:18:34.490 "num_base_bdevs": 2, 00:18:34.490 "num_base_bdevs_discovered": 1, 00:18:34.490 "num_base_bdevs_operational": 1, 00:18:34.490 "base_bdevs_list": [ 00:18:34.490 { 00:18:34.490 "name": null, 00:18:34.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.490 "is_configured": false, 00:18:34.490 "data_offset": 0, 00:18:34.491 "data_size": 7936 00:18:34.491 }, 00:18:34.491 { 00:18:34.491 "name": "BaseBdev2", 00:18:34.491 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:34.491 "is_configured": true, 00:18:34.491 "data_offset": 256, 00:18:34.491 "data_size": 7936 00:18:34.491 } 00:18:34.491 ] 00:18:34.491 }' 00:18:34.491 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.491 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.749 12:44:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.749 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.009 "name": "raid_bdev1", 00:18:35.009 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:35.009 "strip_size_kb": 0, 00:18:35.009 "state": "online", 00:18:35.009 "raid_level": "raid1", 00:18:35.009 "superblock": true, 00:18:35.009 "num_base_bdevs": 2, 00:18:35.009 "num_base_bdevs_discovered": 1, 00:18:35.009 "num_base_bdevs_operational": 1, 00:18:35.009 "base_bdevs_list": [ 00:18:35.009 { 00:18:35.009 "name": null, 00:18:35.009 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.009 "is_configured": false, 00:18:35.009 "data_offset": 0, 00:18:35.009 "data_size": 7936 00:18:35.009 }, 00:18:35.009 { 00:18:35.009 "name": "BaseBdev2", 00:18:35.009 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:35.009 "is_configured": true, 00:18:35.009 "data_offset": 256, 00:18:35.009 "data_size": 7936 00:18:35.009 } 00:18:35.009 ] 00:18:35.009 }' 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.009 12:44:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.009 [2024-12-14 12:44:34.606306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.009 [2024-12-14 12:44:34.624565] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.009 12:44:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:35.009 [2024-12-14 12:44:34.626867] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.946 
12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.946 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.206 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.207 "name": "raid_bdev1", 00:18:36.207 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:36.207 "strip_size_kb": 0, 00:18:36.207 "state": "online", 00:18:36.207 "raid_level": "raid1", 00:18:36.207 "superblock": true, 00:18:36.207 "num_base_bdevs": 2, 00:18:36.207 "num_base_bdevs_discovered": 2, 00:18:36.207 "num_base_bdevs_operational": 2, 00:18:36.207 "process": { 00:18:36.207 "type": "rebuild", 00:18:36.207 "target": "spare", 00:18:36.207 "progress": { 00:18:36.207 "blocks": 2560, 00:18:36.207 "percent": 32 00:18:36.207 } 00:18:36.207 }, 00:18:36.207 "base_bdevs_list": [ 00:18:36.207 { 00:18:36.207 "name": "spare", 00:18:36.207 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:36.207 "is_configured": true, 00:18:36.207 "data_offset": 256, 00:18:36.207 "data_size": 7936 00:18:36.207 }, 00:18:36.207 { 00:18:36.207 "name": "BaseBdev2", 00:18:36.207 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:36.207 "is_configured": true, 00:18:36.207 "data_offset": 256, 00:18:36.207 "data_size": 7936 00:18:36.207 } 00:18:36.207 ] 00:18:36.207 }' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:36.207 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=730 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.207 "name": "raid_bdev1", 00:18:36.207 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:36.207 "strip_size_kb": 0, 00:18:36.207 "state": "online", 00:18:36.207 "raid_level": "raid1", 00:18:36.207 "superblock": true, 00:18:36.207 "num_base_bdevs": 2, 00:18:36.207 "num_base_bdevs_discovered": 2, 00:18:36.207 "num_base_bdevs_operational": 2, 00:18:36.207 "process": { 00:18:36.207 "type": "rebuild", 00:18:36.207 "target": "spare", 00:18:36.207 "progress": { 00:18:36.207 "blocks": 2816, 00:18:36.207 "percent": 35 00:18:36.207 } 00:18:36.207 }, 00:18:36.207 "base_bdevs_list": [ 00:18:36.207 { 00:18:36.207 "name": "spare", 00:18:36.207 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:36.207 "is_configured": true, 00:18:36.207 "data_offset": 256, 00:18:36.207 "data_size": 7936 00:18:36.207 }, 00:18:36.207 { 00:18:36.207 "name": "BaseBdev2", 00:18:36.207 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:36.207 "is_configured": true, 00:18:36.207 "data_offset": 256, 00:18:36.207 "data_size": 7936 00:18:36.207 } 00:18:36.207 ] 00:18:36.207 }' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.207 12:44:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.207 12:44:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.586 "name": "raid_bdev1", 00:18:37.586 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:37.586 "strip_size_kb": 0, 00:18:37.586 "state": 
"online", 00:18:37.586 "raid_level": "raid1", 00:18:37.586 "superblock": true, 00:18:37.586 "num_base_bdevs": 2, 00:18:37.586 "num_base_bdevs_discovered": 2, 00:18:37.586 "num_base_bdevs_operational": 2, 00:18:37.586 "process": { 00:18:37.586 "type": "rebuild", 00:18:37.586 "target": "spare", 00:18:37.586 "progress": { 00:18:37.586 "blocks": 5632, 00:18:37.586 "percent": 70 00:18:37.586 } 00:18:37.586 }, 00:18:37.586 "base_bdevs_list": [ 00:18:37.586 { 00:18:37.586 "name": "spare", 00:18:37.586 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:37.586 "is_configured": true, 00:18:37.586 "data_offset": 256, 00:18:37.586 "data_size": 7936 00:18:37.586 }, 00:18:37.586 { 00:18:37.586 "name": "BaseBdev2", 00:18:37.586 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:37.586 "is_configured": true, 00:18:37.586 "data_offset": 256, 00:18:37.586 "data_size": 7936 00:18:37.586 } 00:18:37.586 ] 00:18:37.586 }' 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.586 12:44:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.586 12:44:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.586 12:44:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.153 [2024-12-14 12:44:37.746033] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:38.153 [2024-12-14 12:44:37.746113] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:38.153 [2024-12-14 12:44:37.746212] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.412 "name": "raid_bdev1", 00:18:38.412 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:38.412 "strip_size_kb": 0, 00:18:38.412 "state": "online", 00:18:38.412 "raid_level": "raid1", 00:18:38.412 "superblock": true, 00:18:38.412 "num_base_bdevs": 2, 00:18:38.412 "num_base_bdevs_discovered": 2, 00:18:38.412 "num_base_bdevs_operational": 2, 00:18:38.412 "base_bdevs_list": [ 00:18:38.412 { 00:18:38.412 "name": "spare", 00:18:38.412 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:38.412 "is_configured": true, 00:18:38.412 "data_offset": 256, 
00:18:38.412 "data_size": 7936 00:18:38.412 }, 00:18:38.412 { 00:18:38.412 "name": "BaseBdev2", 00:18:38.412 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:38.412 "is_configured": true, 00:18:38.412 "data_offset": 256, 00:18:38.412 "data_size": 7936 00:18:38.412 } 00:18:38.412 ] 00:18:38.412 }' 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:38.412 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.672 12:44:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.672 "name": "raid_bdev1", 00:18:38.672 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:38.672 "strip_size_kb": 0, 00:18:38.672 "state": "online", 00:18:38.672 "raid_level": "raid1", 00:18:38.672 "superblock": true, 00:18:38.672 "num_base_bdevs": 2, 00:18:38.672 "num_base_bdevs_discovered": 2, 00:18:38.672 "num_base_bdevs_operational": 2, 00:18:38.672 "base_bdevs_list": [ 00:18:38.672 { 00:18:38.672 "name": "spare", 00:18:38.672 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:38.672 "is_configured": true, 00:18:38.672 "data_offset": 256, 00:18:38.672 "data_size": 7936 00:18:38.672 }, 00:18:38.672 { 00:18:38.672 "name": "BaseBdev2", 00:18:38.672 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:38.672 "is_configured": true, 00:18:38.672 "data_offset": 256, 00:18:38.672 "data_size": 7936 00:18:38.672 } 00:18:38.672 ] 00:18:38.672 }' 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.672 12:44:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.672 "name": "raid_bdev1", 00:18:38.672 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:38.672 "strip_size_kb": 0, 00:18:38.672 "state": "online", 00:18:38.672 "raid_level": "raid1", 00:18:38.672 "superblock": true, 00:18:38.672 "num_base_bdevs": 2, 00:18:38.672 "num_base_bdevs_discovered": 2, 
00:18:38.672 "num_base_bdevs_operational": 2, 00:18:38.672 "base_bdevs_list": [ 00:18:38.672 { 00:18:38.672 "name": "spare", 00:18:38.672 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:38.672 "is_configured": true, 00:18:38.672 "data_offset": 256, 00:18:38.672 "data_size": 7936 00:18:38.672 }, 00:18:38.672 { 00:18:38.672 "name": "BaseBdev2", 00:18:38.672 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:38.672 "is_configured": true, 00:18:38.672 "data_offset": 256, 00:18:38.672 "data_size": 7936 00:18:38.672 } 00:18:38.672 ] 00:18:38.672 }' 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.672 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.240 [2024-12-14 12:44:38.718188] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.240 [2024-12-14 12:44:38.718268] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.240 [2024-12-14 12:44:38.718376] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.240 [2024-12-14 12:44:38.718462] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.240 [2024-12-14 12:44:38.718509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.240 12:44:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.240 [2024-12-14 12:44:38.770118] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:39.240 [2024-12-14 12:44:38.770166] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:39.240 [2024-12-14 12:44:38.770188] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:39.240 [2024-12-14 12:44:38.770198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.240 [2024-12-14 12:44:38.772191] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.240 [2024-12-14 12:44:38.772228] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:39.240 [2024-12-14 12:44:38.772285] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:39.240 [2024-12-14 12:44:38.772334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.240 [2024-12-14 12:44:38.772451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.240 spare 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.240 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.240 [2024-12-14 12:44:38.872356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:39.241 [2024-12-14 12:44:38.872424] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:39.241 [2024-12-14 12:44:38.872539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:39.241 [2024-12-14 12:44:38.872646] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:39.241 [2024-12-14 12:44:38.872657] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:39.241 [2024-12-14 12:44:38.872740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.241 12:44:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.241 "name": "raid_bdev1", 00:18:39.241 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:39.241 "strip_size_kb": 0, 00:18:39.241 "state": "online", 00:18:39.241 "raid_level": "raid1", 00:18:39.241 "superblock": true, 00:18:39.241 "num_base_bdevs": 2, 00:18:39.241 "num_base_bdevs_discovered": 2, 00:18:39.241 "num_base_bdevs_operational": 2, 00:18:39.241 "base_bdevs_list": [ 00:18:39.241 { 00:18:39.241 "name": "spare", 00:18:39.241 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:39.241 "is_configured": true, 00:18:39.241 "data_offset": 256, 00:18:39.241 "data_size": 7936 00:18:39.241 }, 00:18:39.241 { 00:18:39.241 "name": "BaseBdev2", 00:18:39.241 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:39.241 "is_configured": true, 00:18:39.241 "data_offset": 256, 00:18:39.241 "data_size": 7936 00:18:39.241 } 00:18:39.241 ] 00:18:39.241 }' 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.241 12:44:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.501 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:39.501 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:39.501 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:39.501 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:39.501 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:39.760 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:39.761 "name": "raid_bdev1", 00:18:39.761 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:39.761 "strip_size_kb": 0, 00:18:39.761 "state": "online", 00:18:39.761 "raid_level": "raid1", 00:18:39.761 "superblock": true, 00:18:39.761 "num_base_bdevs": 2, 00:18:39.761 "num_base_bdevs_discovered": 2, 00:18:39.761 "num_base_bdevs_operational": 2, 00:18:39.761 "base_bdevs_list": [ 00:18:39.761 { 00:18:39.761 "name": "spare", 00:18:39.761 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:39.761 "is_configured": true, 00:18:39.761 "data_offset": 256, 00:18:39.761 "data_size": 7936 00:18:39.761 }, 00:18:39.761 { 00:18:39.761 "name": "BaseBdev2", 00:18:39.761 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:39.761 "is_configured": true, 00:18:39.761 "data_offset": 256, 00:18:39.761 "data_size": 7936 00:18:39.761 } 00:18:39.761 ] 00:18:39.761 }' 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.761 [2024-12-14 12:44:39.421054] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.761 "name": "raid_bdev1", 00:18:39.761 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:39.761 "strip_size_kb": 0, 00:18:39.761 "state": "online", 00:18:39.761 "raid_level": "raid1", 00:18:39.761 "superblock": true, 00:18:39.761 "num_base_bdevs": 2, 00:18:39.761 "num_base_bdevs_discovered": 1, 00:18:39.761 "num_base_bdevs_operational": 1, 00:18:39.761 "base_bdevs_list": [ 00:18:39.761 { 00:18:39.761 "name": null, 00:18:39.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.761 
"is_configured": false, 00:18:39.761 "data_offset": 0, 00:18:39.761 "data_size": 7936 00:18:39.761 }, 00:18:39.761 { 00:18:39.761 "name": "BaseBdev2", 00:18:39.761 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:39.761 "is_configured": true, 00:18:39.761 "data_offset": 256, 00:18:39.761 "data_size": 7936 00:18:39.761 } 00:18:39.761 ] 00:18:39.761 }' 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.761 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.328 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:40.328 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.328 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.328 [2024-12-14 12:44:39.864279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.328 [2024-12-14 12:44:39.864477] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:40.328 [2024-12-14 12:44:39.864495] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:40.328 [2024-12-14 12:44:39.864534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.328 [2024-12-14 12:44:39.880315] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:40.328 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.328 12:44:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:40.328 [2024-12-14 12:44:39.882186] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:41.266 "name": "raid_bdev1", 00:18:41.266 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:41.266 "strip_size_kb": 0, 00:18:41.266 "state": "online", 00:18:41.266 "raid_level": "raid1", 00:18:41.266 "superblock": true, 00:18:41.266 "num_base_bdevs": 2, 00:18:41.266 "num_base_bdevs_discovered": 2, 00:18:41.266 "num_base_bdevs_operational": 2, 00:18:41.266 "process": { 00:18:41.266 "type": "rebuild", 00:18:41.266 "target": "spare", 00:18:41.266 "progress": { 00:18:41.266 "blocks": 2560, 00:18:41.266 "percent": 32 00:18:41.266 } 00:18:41.266 }, 00:18:41.266 "base_bdevs_list": [ 00:18:41.266 { 00:18:41.266 "name": "spare", 00:18:41.266 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:41.266 "is_configured": true, 00:18:41.266 "data_offset": 256, 00:18:41.266 "data_size": 7936 00:18:41.266 }, 00:18:41.266 { 00:18:41.266 "name": "BaseBdev2", 00:18:41.266 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:41.266 "is_configured": true, 00:18:41.266 "data_offset": 256, 00:18:41.266 "data_size": 7936 00:18:41.266 } 00:18:41.266 ] 00:18:41.266 }' 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.266 12:44:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.525 [2024-12-14 12:44:41.018999] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.525 [2024-12-14 12:44:41.088018] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:41.525 [2024-12-14 12:44:41.088109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.525 [2024-12-14 12:44:41.088125] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.525 [2024-12-14 12:44:41.088133] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.525 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.526 12:44:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.526 "name": "raid_bdev1", 00:18:41.526 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:41.526 "strip_size_kb": 0, 00:18:41.526 "state": "online", 00:18:41.526 "raid_level": "raid1", 00:18:41.526 "superblock": true, 00:18:41.526 "num_base_bdevs": 2, 00:18:41.526 "num_base_bdevs_discovered": 1, 00:18:41.526 "num_base_bdevs_operational": 1, 00:18:41.526 "base_bdevs_list": [ 00:18:41.526 { 00:18:41.526 "name": null, 00:18:41.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.526 "is_configured": false, 00:18:41.526 "data_offset": 0, 00:18:41.526 "data_size": 7936 00:18:41.526 }, 00:18:41.526 { 00:18:41.526 "name": "BaseBdev2", 00:18:41.526 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:41.526 "is_configured": true, 00:18:41.526 "data_offset": 256, 00:18:41.526 "data_size": 7936 00:18:41.526 } 00:18:41.526 ] 00:18:41.526 }' 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.526 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.095 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:42.095 12:44:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.095 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.095 [2024-12-14 12:44:41.532378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:42.095 [2024-12-14 12:44:41.532504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.095 [2024-12-14 12:44:41.532548] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:42.095 [2024-12-14 12:44:41.532582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.095 [2024-12-14 12:44:41.532792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.095 [2024-12-14 12:44:41.532840] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:42.095 [2024-12-14 12:44:41.532916] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:42.095 [2024-12-14 12:44:41.532952] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:42.095 [2024-12-14 12:44:41.532989] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:42.095 [2024-12-14 12:44:41.533050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.095 [2024-12-14 12:44:41.548523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:42.095 spare 00:18:42.095 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.095 [2024-12-14 12:44:41.550323] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:42.095 12:44:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:43.032 "name": "raid_bdev1", 00:18:43.032 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:43.032 "strip_size_kb": 0, 00:18:43.032 "state": "online", 00:18:43.032 "raid_level": "raid1", 00:18:43.032 "superblock": true, 00:18:43.032 "num_base_bdevs": 2, 00:18:43.032 "num_base_bdevs_discovered": 2, 00:18:43.032 "num_base_bdevs_operational": 2, 00:18:43.032 "process": { 00:18:43.032 "type": "rebuild", 00:18:43.032 "target": "spare", 00:18:43.032 "progress": { 00:18:43.032 "blocks": 2560, 00:18:43.032 "percent": 32 00:18:43.032 } 00:18:43.032 }, 00:18:43.032 "base_bdevs_list": [ 00:18:43.032 { 00:18:43.032 "name": "spare", 00:18:43.032 "uuid": "d7508128-3dfe-56b9-9164-e73a016a0bac", 00:18:43.032 "is_configured": true, 00:18:43.032 "data_offset": 256, 00:18:43.032 "data_size": 7936 00:18:43.032 }, 00:18:43.032 { 00:18:43.032 "name": "BaseBdev2", 00:18:43.032 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:43.032 "is_configured": true, 00:18:43.032 "data_offset": 256, 00:18:43.032 "data_size": 7936 00:18:43.032 } 00:18:43.032 ] 00:18:43.032 }' 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.032 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.032 [2024-12-14 
12:44:42.686772] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.032 [2024-12-14 12:44:42.755142] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:43.032 [2024-12-14 12:44:42.755239] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.032 [2024-12-14 12:44:42.755293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.032 [2024-12-14 12:44:42.755314] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.292 12:44:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.292 "name": "raid_bdev1", 00:18:43.292 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:43.292 "strip_size_kb": 0, 00:18:43.292 "state": "online", 00:18:43.292 "raid_level": "raid1", 00:18:43.292 "superblock": true, 00:18:43.292 "num_base_bdevs": 2, 00:18:43.292 "num_base_bdevs_discovered": 1, 00:18:43.292 "num_base_bdevs_operational": 1, 00:18:43.292 "base_bdevs_list": [ 00:18:43.292 { 00:18:43.292 "name": null, 00:18:43.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.292 "is_configured": false, 00:18:43.292 "data_offset": 0, 00:18:43.292 "data_size": 7936 00:18:43.292 }, 00:18:43.292 { 00:18:43.292 "name": "BaseBdev2", 00:18:43.292 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:43.292 "is_configured": true, 00:18:43.292 "data_offset": 256, 00:18:43.292 "data_size": 7936 00:18:43.292 } 00:18:43.292 ] 00:18:43.292 }' 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.292 12:44:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.551 12:44:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.551 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:43.811 "name": "raid_bdev1", 00:18:43.811 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:43.811 "strip_size_kb": 0, 00:18:43.811 "state": "online", 00:18:43.811 "raid_level": "raid1", 00:18:43.811 "superblock": true, 00:18:43.811 "num_base_bdevs": 2, 00:18:43.811 "num_base_bdevs_discovered": 1, 00:18:43.811 "num_base_bdevs_operational": 1, 00:18:43.811 "base_bdevs_list": [ 00:18:43.811 { 00:18:43.811 "name": null, 00:18:43.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.811 "is_configured": false, 00:18:43.811 "data_offset": 0, 00:18:43.811 "data_size": 7936 00:18:43.811 }, 00:18:43.811 { 00:18:43.811 "name": "BaseBdev2", 00:18:43.811 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:43.811 "is_configured": true, 00:18:43.811 "data_offset": 256, 
00:18:43.811 "data_size": 7936 00:18:43.811 } 00:18:43.811 ] 00:18:43.811 }' 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.811 [2024-12-14 12:44:43.400634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:43.811 [2024-12-14 12:44:43.400690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:43.811 [2024-12-14 12:44:43.400711] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:43.811 [2024-12-14 12:44:43.400719] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:43.811 [2024-12-14 12:44:43.400894] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:43.811 [2024-12-14 12:44:43.400908] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:43.811 [2024-12-14 12:44:43.400956] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:43.811 [2024-12-14 12:44:43.400970] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:43.811 [2024-12-14 12:44:43.400979] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:43.811 [2024-12-14 12:44:43.400988] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:43.811 BaseBdev1 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.811 12:44:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:44.750 12:44:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:44.750 "name": "raid_bdev1", 00:18:44.750 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:44.750 "strip_size_kb": 0, 00:18:44.750 "state": "online", 00:18:44.750 "raid_level": "raid1", 00:18:44.750 "superblock": true, 00:18:44.750 "num_base_bdevs": 2, 00:18:44.750 "num_base_bdevs_discovered": 1, 00:18:44.750 "num_base_bdevs_operational": 1, 00:18:44.750 "base_bdevs_list": [ 00:18:44.750 { 00:18:44.750 "name": null, 00:18:44.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.750 "is_configured": false, 00:18:44.750 "data_offset": 0, 00:18:44.750 "data_size": 7936 00:18:44.750 }, 00:18:44.750 { 00:18:44.750 "name": "BaseBdev2", 00:18:44.750 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:44.750 "is_configured": true, 00:18:44.750 "data_offset": 256, 00:18:44.750 "data_size": 7936 00:18:44.750 } 00:18:44.750 ] 00:18:44.750 }' 00:18:44.750 12:44:44 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:44.750 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.319 "name": "raid_bdev1", 00:18:45.319 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:45.319 "strip_size_kb": 0, 00:18:45.319 "state": "online", 00:18:45.319 "raid_level": "raid1", 00:18:45.319 "superblock": true, 00:18:45.319 "num_base_bdevs": 2, 00:18:45.319 "num_base_bdevs_discovered": 1, 00:18:45.319 "num_base_bdevs_operational": 1, 00:18:45.319 "base_bdevs_list": [ 00:18:45.319 { 00:18:45.319 "name": 
null, 00:18:45.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.319 "is_configured": false, 00:18:45.319 "data_offset": 0, 00:18:45.319 "data_size": 7936 00:18:45.319 }, 00:18:45.319 { 00:18:45.319 "name": "BaseBdev2", 00:18:45.319 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:45.319 "is_configured": true, 00:18:45.319 "data_offset": 256, 00:18:45.319 "data_size": 7936 00:18:45.319 } 00:18:45.319 ] 00:18:45.319 }' 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.319 12:44:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.319 [2024-12-14 12:44:45.026752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.319 [2024-12-14 12:44:45.026919] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.319 [2024-12-14 12:44:45.026936] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:45.319 request: 00:18:45.319 { 00:18:45.319 "base_bdev": "BaseBdev1", 00:18:45.319 "raid_bdev": "raid_bdev1", 00:18:45.319 "method": "bdev_raid_add_base_bdev", 00:18:45.319 "req_id": 1 00:18:45.319 } 00:18:45.319 Got JSON-RPC error response 00:18:45.319 response: 00:18:45.319 { 00:18:45.319 "code": -22, 00:18:45.319 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:45.319 } 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:45.319 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:45.320 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.320 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.320 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.320 12:44:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.699 "name": "raid_bdev1", 00:18:46.699 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:46.699 "strip_size_kb": 0, 
00:18:46.699 "state": "online", 00:18:46.699 "raid_level": "raid1", 00:18:46.699 "superblock": true, 00:18:46.699 "num_base_bdevs": 2, 00:18:46.699 "num_base_bdevs_discovered": 1, 00:18:46.699 "num_base_bdevs_operational": 1, 00:18:46.699 "base_bdevs_list": [ 00:18:46.699 { 00:18:46.699 "name": null, 00:18:46.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.699 "is_configured": false, 00:18:46.699 "data_offset": 0, 00:18:46.699 "data_size": 7936 00:18:46.699 }, 00:18:46.699 { 00:18:46.699 "name": "BaseBdev2", 00:18:46.699 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:46.699 "is_configured": true, 00:18:46.699 "data_offset": 256, 00:18:46.699 "data_size": 7936 00:18:46.699 } 00:18:46.699 ] 00:18:46.699 }' 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.699 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.959 12:44:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:46.959 "name": "raid_bdev1", 00:18:46.959 "uuid": "594e4990-fd49-4768-8fde-d0e6b4434215", 00:18:46.959 "strip_size_kb": 0, 00:18:46.959 "state": "online", 00:18:46.959 "raid_level": "raid1", 00:18:46.959 "superblock": true, 00:18:46.959 "num_base_bdevs": 2, 00:18:46.959 "num_base_bdevs_discovered": 1, 00:18:46.959 "num_base_bdevs_operational": 1, 00:18:46.959 "base_bdevs_list": [ 00:18:46.959 { 00:18:46.959 "name": null, 00:18:46.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.959 "is_configured": false, 00:18:46.959 "data_offset": 0, 00:18:46.959 "data_size": 7936 00:18:46.959 }, 00:18:46.959 { 00:18:46.959 "name": "BaseBdev2", 00:18:46.959 "uuid": "759cb092-da56-5d0b-9606-86c64cf5c69b", 00:18:46.959 "is_configured": true, 00:18:46.959 "data_offset": 256, 00:18:46.959 "data_size": 7936 00:18:46.959 } 00:18:46.959 ] 00:18:46.959 }' 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 90774 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 90774 ']' 00:18:46.959 12:44:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 90774 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:46.959 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90774 00:18:46.960 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:46.960 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:46.960 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90774' 00:18:46.960 killing process with pid 90774 00:18:46.960 Received shutdown signal, test time was about 60.000000 seconds 00:18:46.960 00:18:46.960 Latency(us) 00:18:46.960 [2024-12-14T12:44:46.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.960 [2024-12-14T12:44:46.698Z] =================================================================================================================== 00:18:46.960 [2024-12-14T12:44:46.698Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:46.960 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 90774 00:18:46.960 [2024-12-14 12:44:46.617970] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:46.960 [2024-12-14 12:44:46.618103] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.960 [2024-12-14 12:44:46.618151] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.960 [2024-12-14 12:44:46.618162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:46.960 12:44:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 90774 00:18:47.219 [2024-12-14 12:44:46.904364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.600 12:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:48.600 00:18:48.600 real 0m17.325s 00:18:48.600 user 0m22.535s 00:18:48.600 sys 0m1.694s 00:18:48.600 12:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.600 ************************************ 00:18:48.600 END TEST raid_rebuild_test_sb_md_interleaved 00:18:48.600 ************************************ 00:18:48.600 12:44:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.600 12:44:48 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:48.600 12:44:48 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:48.600 12:44:48 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 90774 ']' 00:18:48.600 12:44:48 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 90774 00:18:48.600 12:44:48 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:48.600 ************************************ 00:18:48.600 END TEST bdev_raid 00:18:48.600 ************************************ 00:18:48.600 00:18:48.600 real 11m52.887s 00:18:48.600 user 16m6.342s 00:18:48.600 sys 1m48.568s 00:18:48.600 12:44:48 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.600 12:44:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.600 12:44:48 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:48.600 12:44:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:48.600 12:44:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.600 12:44:48 -- common/autotest_common.sh@10 -- # set +x 00:18:48.600 
************************************ 00:18:48.600 START TEST spdkcli_raid 00:18:48.600 ************************************ 00:18:48.600 12:44:48 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:48.600 * Looking for test storage... 00:18:48.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:48.600 12:44:48 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:48.600 12:44:48 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:48.601 12:44:48 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:48.601 12:44:48 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:48.601 12:44:48 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:48.601 12:44:48 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:48.601 12:44:48 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:48.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.601 --rc genhtml_branch_coverage=1 00:18:48.601 --rc genhtml_function_coverage=1 00:18:48.601 --rc genhtml_legend=1 00:18:48.601 --rc geninfo_all_blocks=1 00:18:48.601 --rc geninfo_unexecuted_blocks=1 00:18:48.601 00:18:48.601 ' 00:18:48.601 12:44:48 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:48.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.601 --rc genhtml_branch_coverage=1 00:18:48.601 --rc genhtml_function_coverage=1 00:18:48.601 --rc genhtml_legend=1 00:18:48.601 --rc geninfo_all_blocks=1 00:18:48.601 --rc geninfo_unexecuted_blocks=1 00:18:48.601 00:18:48.601 ' 00:18:48.601 
12:44:48 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:48.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.601 --rc genhtml_branch_coverage=1 00:18:48.601 --rc genhtml_function_coverage=1 00:18:48.601 --rc genhtml_legend=1 00:18:48.601 --rc geninfo_all_blocks=1 00:18:48.601 --rc geninfo_unexecuted_blocks=1 00:18:48.601 00:18:48.601 ' 00:18:48.601 12:44:48 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:48.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:48.601 --rc genhtml_branch_coverage=1 00:18:48.601 --rc genhtml_function_coverage=1 00:18:48.601 --rc genhtml_legend=1 00:18:48.601 --rc geninfo_all_blocks=1 00:18:48.601 --rc geninfo_unexecuted_blocks=1 00:18:48.601 00:18:48.601 ' 00:18:48.601 12:44:48 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:48.601 12:44:48 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:48.601 12:44:48 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:48.601 12:44:48 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:48.601 12:44:48 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:48.601 12:44:48 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:48.601 12:44:48 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:48.861 12:44:48 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:48.861 12:44:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=91451 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:48.861 12:44:48 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 91451 00:18:48.861 12:44:48 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 91451 ']' 00:18:48.861 12:44:48 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:48.861 12:44:48 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:48.861 12:44:48 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:48.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:48.861 12:44:48 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:48.861 12:44:48 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.861 [2024-12-14 12:44:48.455111] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:18:48.861 [2024-12-14 12:44:48.455671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91451 ] 00:18:49.120 [2024-12-14 12:44:48.630206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:49.120 [2024-12-14 12:44:48.741573] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.120 [2024-12-14 12:44:48.741622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.058 12:44:49 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.058 12:44:49 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:50.058 12:44:49 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:50.058 12:44:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.058 12:44:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.058 12:44:49 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:50.058 12:44:49 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.058 12:44:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.058 12:44:49 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:50.058 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:50.058 ' 00:18:51.437 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:51.437 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:51.697 12:44:51 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:51.697 12:44:51 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.697 12:44:51 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.697 12:44:51 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:51.697 12:44:51 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:51.697 12:44:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.697 12:44:51 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:51.697 ' 00:18:52.635 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:52.894 12:44:52 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:52.894 12:44:52 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.894 12:44:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.894 12:44:52 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:52.894 12:44:52 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.894 12:44:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.894 12:44:52 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:52.894 12:44:52 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:53.463 12:44:53 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:53.463 12:44:53 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:53.463 12:44:53 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:53.463 12:44:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.463 12:44:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.463 12:44:53 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:53.463 12:44:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.463 12:44:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.463 12:44:53 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:53.463 ' 00:18:54.400 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:54.659 12:44:54 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:54.659 12:44:54 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.659 12:44:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.659 12:44:54 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:54.659 12:44:54 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.659 12:44:54 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:54.659 12:44:54 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:54.659 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:54.659 ' 00:18:56.036 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:56.036 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:56.036 12:44:55 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:56.036 12:44:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.036 12:44:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.295 12:44:55 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 91451 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 91451 ']' 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 91451 00:18:56.295 12:44:55 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91451 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.295 killing process with pid 91451 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91451' 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 91451 00:18:56.295 12:44:55 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 91451 00:18:58.862 12:44:58 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:58.862 12:44:58 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 91451 ']' 00:18:58.862 12:44:58 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 91451 00:18:58.862 12:44:58 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 91451 ']' 00:18:58.862 Process with pid 91451 is not found 00:18:58.862 12:44:58 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 91451 00:18:58.862 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (91451) - No such process 00:18:58.862 12:44:58 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 91451 is not found' 00:18:58.862 12:44:58 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:58.862 12:44:58 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:58.862 12:44:58 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:58.862 12:44:58 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:58.862 ************************************ 00:18:58.862 END TEST spdkcli_raid 
00:18:58.862 ************************************ 00:18:58.862 00:18:58.862 real 0m10.016s 00:18:58.862 user 0m20.662s 00:18:58.862 sys 0m1.145s 00:18:58.862 12:44:58 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.862 12:44:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.862 12:44:58 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:58.862 12:44:58 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.862 12:44:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.862 12:44:58 -- common/autotest_common.sh@10 -- # set +x 00:18:58.862 ************************************ 00:18:58.862 START TEST blockdev_raid5f 00:18:58.862 ************************************ 00:18:58.862 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:58.862 * Looking for test storage... 00:18:58.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:58.862 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:58.862 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:58.862 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:58.862 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:58.862 12:44:58 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:58.862 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:58.862 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:58.862 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.862 --rc genhtml_branch_coverage=1 00:18:58.862 --rc genhtml_function_coverage=1 00:18:58.862 --rc genhtml_legend=1 00:18:58.863 --rc geninfo_all_blocks=1 00:18:58.863 --rc geninfo_unexecuted_blocks=1 00:18:58.863 00:18:58.863 ' 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:58.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.863 --rc genhtml_branch_coverage=1 00:18:58.863 --rc genhtml_function_coverage=1 00:18:58.863 --rc genhtml_legend=1 00:18:58.863 --rc geninfo_all_blocks=1 00:18:58.863 --rc geninfo_unexecuted_blocks=1 00:18:58.863 00:18:58.863 ' 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:58.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.863 --rc genhtml_branch_coverage=1 00:18:58.863 --rc genhtml_function_coverage=1 00:18:58.863 --rc genhtml_legend=1 00:18:58.863 --rc geninfo_all_blocks=1 00:18:58.863 --rc geninfo_unexecuted_blocks=1 00:18:58.863 00:18:58.863 ' 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:58.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:58.863 --rc genhtml_branch_coverage=1 00:18:58.863 --rc genhtml_function_coverage=1 00:18:58.863 --rc genhtml_legend=1 00:18:58.863 --rc geninfo_all_blocks=1 00:18:58.863 --rc geninfo_unexecuted_blocks=1 00:18:58.863 00:18:58.863 ' 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=91726 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
91726 00:18:58.863 12:44:58 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 91726 ']' 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.863 12:44:58 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:58.863 [2024-12-14 12:44:58.522973] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:18:58.863 [2024-12-14 12:44:58.523559] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91726 ] 00:18:59.142 [2024-12-14 12:44:58.701291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.142 [2024-12-14 12:44:58.806928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:00.120 12:44:59 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.120 Malloc0 00:19:00.120 Malloc1 00:19:00.120 Malloc2 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:19:00.120 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.120 12:44:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.380 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:00.380 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "14cb80bc-e2c1-4157-9f4c-e79a75c041e8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "14cb80bc-e2c1-4157-9f4c-e79a75c041e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "14cb80bc-e2c1-4157-9f4c-e79a75c041e8",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0919b136-7cc8-43a4-a699-9943b6575302",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "1023161c-d551-4351-839b-1257ba88551c",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d124fab4-9215-42cd-879e-a38ef0df52cb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:00.380 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:00.380 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:00.380 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:00.380 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:00.380 12:44:59 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 91726 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 91726 ']' 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 91726 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91726 00:19:00.380 killing process with pid 91726 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91726' 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 91726 00:19:00.380 12:44:59 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 91726 00:19:02.919 12:45:02 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:02.919 12:45:02 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:02.919 12:45:02 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:02.919 12:45:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:02.919 12:45:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:02.919 ************************************ 00:19:02.919 START TEST bdev_hello_world 00:19:02.919 ************************************ 00:19:02.919 12:45:02 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:02.919 [2024-12-14 12:45:02.600297] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:02.919 [2024-12-14 12:45:02.600395] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91793 ] 00:19:03.178 [2024-12-14 12:45:02.775338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.178 [2024-12-14 12:45:02.881279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.748 [2024-12-14 12:45:03.396866] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:03.748 [2024-12-14 12:45:03.396997] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:03.748 [2024-12-14 12:45:03.397033] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:03.748 [2024-12-14 12:45:03.397522] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:03.748 [2024-12-14 12:45:03.397645] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:03.748 [2024-12-14 12:45:03.397661] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:03.748 [2024-12-14 12:45:03.397710] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:03.748 00:19:03.748 [2024-12-14 12:45:03.397727] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:05.130 00:19:05.130 real 0m2.203s 00:19:05.130 user 0m1.846s 00:19:05.130 sys 0m0.235s 00:19:05.130 12:45:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:05.130 12:45:04 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:05.130 ************************************ 00:19:05.130 END TEST bdev_hello_world 00:19:05.130 ************************************ 00:19:05.130 12:45:04 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:05.130 12:45:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:05.130 12:45:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:05.130 12:45:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:05.130 ************************************ 00:19:05.130 START TEST bdev_bounds 00:19:05.130 ************************************ 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=91834 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 91834' 00:19:05.130 Process bdevio pid: 91834 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 91834 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 91834 ']' 00:19:05.130 12:45:04 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.130 12:45:04 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:05.389 [2024-12-14 12:45:04.871034] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:05.389 [2024-12-14 12:45:04.871232] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91834 ] 00:19:05.389 [2024-12-14 12:45:05.044723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:05.648 [2024-12-14 12:45:05.153784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.648 [2024-12-14 12:45:05.153926] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.648 [2024-12-14 12:45:05.153962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:06.216 12:45:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.217 12:45:05 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:06.217 12:45:05 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:06.217 I/O targets: 00:19:06.217 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:06.217 00:19:06.217 
00:19:06.217 CUnit - A unit testing framework for C - Version 2.1-3 00:19:06.217 http://cunit.sourceforge.net/ 00:19:06.217 00:19:06.217 00:19:06.217 Suite: bdevio tests on: raid5f 00:19:06.217 Test: blockdev write read block ...passed 00:19:06.217 Test: blockdev write zeroes read block ...passed 00:19:06.217 Test: blockdev write zeroes read no split ...passed 00:19:06.217 Test: blockdev write zeroes read split ...passed 00:19:06.477 Test: blockdev write zeroes read split partial ...passed 00:19:06.477 Test: blockdev reset ...passed 00:19:06.477 Test: blockdev write read 8 blocks ...passed 00:19:06.477 Test: blockdev write read size > 128k ...passed 00:19:06.477 Test: blockdev write read invalid size ...passed 00:19:06.477 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:06.477 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:06.477 Test: blockdev write read max offset ...passed 00:19:06.477 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:06.477 Test: blockdev writev readv 8 blocks ...passed 00:19:06.477 Test: blockdev writev readv 30 x 1block ...passed 00:19:06.477 Test: blockdev writev readv block ...passed 00:19:06.477 Test: blockdev writev readv size > 128k ...passed 00:19:06.477 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:06.477 Test: blockdev comparev and writev ...passed 00:19:06.477 Test: blockdev nvme passthru rw ...passed 00:19:06.477 Test: blockdev nvme passthru vendor specific ...passed 00:19:06.477 Test: blockdev nvme admin passthru ...passed 00:19:06.477 Test: blockdev copy ...passed 00:19:06.477 00:19:06.477 Run Summary: Type Total Ran Passed Failed Inactive 00:19:06.477 suites 1 1 n/a 0 0 00:19:06.477 tests 23 23 23 0 0 00:19:06.477 asserts 130 130 130 0 n/a 00:19:06.477 00:19:06.477 Elapsed time = 0.616 seconds 00:19:06.477 0 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 91834 00:19:06.477 
12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 91834 ']' 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 91834 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91834 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91834' 00:19:06.477 killing process with pid 91834 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 91834 00:19:06.477 12:45:06 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 91834 00:19:07.857 12:45:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:07.857 00:19:07.857 real 0m2.662s 00:19:07.857 user 0m6.622s 00:19:07.857 sys 0m0.355s 00:19:07.857 12:45:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.857 12:45:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:07.857 ************************************ 00:19:07.857 END TEST bdev_bounds 00:19:07.857 ************************************ 00:19:07.857 12:45:07 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:07.857 12:45:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:07.857 12:45:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.857 
12:45:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:07.857 ************************************ 00:19:07.857 START TEST bdev_nbd 00:19:07.857 ************************************ 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=91895 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 91895 /var/tmp/spdk-nbd.sock 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 91895 ']' 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:07.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.857 12:45:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:08.117 [2024-12-14 12:45:07.610571] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:08.117 [2024-12-14 12:45:07.611142] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.117 [2024-12-14 12:45:07.783417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.376 [2024-12-14 12:45:07.891824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:08.946 1+0 records in 00:19:08.946 1+0 records out 00:19:08.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322999 s, 12.7 MB/s 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:08.946 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:09.206 { 00:19:09.206 "nbd_device": "/dev/nbd0", 00:19:09.206 "bdev_name": "raid5f" 00:19:09.206 } 00:19:09.206 ]' 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:09.206 { 00:19:09.206 "nbd_device": "/dev/nbd0", 00:19:09.206 "bdev_name": "raid5f" 00:19:09.206 } 00:19:09.206 ]' 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:09.206 12:45:08 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.466 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.725 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:09.985 /dev/nbd0 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:09.985 12:45:09 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:09.985 1+0 records in 00:19:09.985 1+0 records out 00:19:09.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427597 s, 9.6 MB/s 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.985 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:10.245 { 00:19:10.245 "nbd_device": "/dev/nbd0", 00:19:10.245 "bdev_name": "raid5f" 00:19:10.245 } 00:19:10.245 ]' 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:10.245 { 00:19:10.245 "nbd_device": "/dev/nbd0", 00:19:10.245 "bdev_name": "raid5f" 00:19:10.245 } 00:19:10.245 ]' 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:10.245 256+0 records in 00:19:10.245 256+0 records out 00:19:10.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142689 s, 73.5 MB/s 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:10.245 256+0 records in 00:19:10.245 256+0 records out 00:19:10.245 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0313911 s, 33.4 MB/s 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:10.245 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:10.504 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:10.504 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:10.504 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.504 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:10.504 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.504 12:45:09 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:10.504 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.505 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:10.763 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:10.763 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:10.763 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:10.763 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:10.763 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:10.763 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:10.763 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:10.764 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:11.023 malloc_lvol_verify 00:19:11.023 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:11.283 0b810027-717d-45e5-9936-c7f03c79e111 00:19:11.283 12:45:10 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:11.542 a44d9c86-5da8-4856-865d-246e03e6de12 00:19:11.542 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:11.542 /dev/nbd0 00:19:11.542 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:11.542 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:11.542 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:11.542 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:11.542 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:11.542 mke2fs 1.47.0 (5-Feb-2023) 00:19:11.542 Discarding device blocks: 0/4096 done 00:19:11.542 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:11.542 00:19:11.802 Allocating group tables: 0/1 done 00:19:11.802 Writing inode tables: 0/1 done 00:19:11.802 Creating journal (1024 blocks): done 00:19:11.802 Writing superblocks and filesystem accounting information: 0/1 done 00:19:11.802 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 91895 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 91895 ']' 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 91895 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91895 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91895' 00:19:11.802 killing process with pid 91895 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 91895 00:19:11.802 12:45:11 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 91895 00:19:13.713 12:45:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:13.713 00:19:13.713 real 0m5.410s 00:19:13.713 user 0m7.292s 00:19:13.713 sys 0m1.270s 00:19:13.713 12:45:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.713 12:45:12 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:13.713 ************************************ 00:19:13.713 END TEST bdev_nbd 00:19:13.713 ************************************ 00:19:13.713 12:45:12 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:13.713 12:45:12 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:13.713 12:45:12 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:13.713 12:45:12 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:13.713 12:45:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.713 12:45:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.713 12:45:12 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:13.713 ************************************ 00:19:13.713 START TEST bdev_fio 00:19:13.713 ************************************ 00:19:13.713 12:45:12 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:13.713 12:45:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:13.713 12:45:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:13.713 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:13.713 12:45:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:13.713 12:45:12 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:13.713 12:45:12 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:13.713 ************************************ 00:19:13.713 START TEST bdev_fio_rw_verify 00:19:13.713 ************************************ 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:13.713 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:13.714 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:13.714 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:13.714 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:13.714 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:13.714 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:13.714 12:45:13 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:13.714 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:13.714 fio-3.35 00:19:13.714 Starting 1 thread 00:19:25.935 00:19:25.935 job_raid5f: (groupid=0, jobs=1): err= 0: pid=92099: Sat Dec 14 12:45:24 2024 00:19:25.935 read: IOPS=12.0k, BW=47.1MiB/s (49.3MB/s)(471MiB/10001msec) 00:19:25.935 slat (nsec): min=17676, max=58660, avg=20169.38, stdev=2078.96 00:19:25.935 clat (usec): min=9, max=743, avg=132.98, stdev=48.23 00:19:25.935 lat (usec): min=29, max=771, avg=153.15, stdev=48.48 00:19:25.935 clat percentiles (usec): 00:19:25.935 | 50.000th=[ 133], 99.000th=[ 223], 99.900th=[ 255], 99.990th=[ 293], 00:19:25.935 | 99.999th=[ 717] 00:19:25.935 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(489MiB/9872msec); 0 zone resets 00:19:25.935 slat (usec): min=7, max=263, avg=16.69, stdev= 3.55 00:19:25.935 clat (usec): min=58, max=1646, avg=302.00, stdev=42.78 00:19:25.935 lat (usec): min=73, max=1909, avg=318.68, stdev=43.85 00:19:25.935 clat percentiles (usec): 00:19:25.935 | 50.000th=[ 306], 99.000th=[ 400], 99.900th=[ 578], 99.990th=[ 1037], 00:19:25.935 | 99.999th=[ 1565] 00:19:25.935 bw ( KiB/s): min=47336, max=53216, per=98.48%, avg=49929.68, stdev=1610.44, samples=19 00:19:25.935 iops : min=11834, max=13304, avg=12482.42, stdev=402.61, samples=19 00:19:25.935 lat (usec) : 10=0.01%, 20=0.01%, 
50=0.01%, 100=15.61%, 250=39.54% 00:19:25.935 lat (usec) : 500=44.78%, 750=0.05%, 1000=0.02% 00:19:25.935 lat (msec) : 2=0.01% 00:19:25.935 cpu : usr=99.08%, sys=0.37%, ctx=30, majf=0, minf=9923 00:19:25.935 IO depths : 1=7.6%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:25.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.935 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.935 issued rwts: total=120486,125124,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.935 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:25.935 00:19:25.935 Run status group 0 (all jobs): 00:19:25.935 READ: bw=47.1MiB/s (49.3MB/s), 47.1MiB/s-47.1MiB/s (49.3MB/s-49.3MB/s), io=471MiB (494MB), run=10001-10001msec 00:19:25.935 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=489MiB (513MB), run=9872-9872msec 00:19:26.195 ----------------------------------------------------- 00:19:26.195 Suppressions used: 00:19:26.195 count bytes template 00:19:26.195 1 7 /usr/src/fio/parse.c 00:19:26.195 887 85152 /usr/src/fio/iolog.c 00:19:26.195 1 8 libtcmalloc_minimal.so 00:19:26.195 1 904 libcrypto.so 00:19:26.195 ----------------------------------------------------- 00:19:26.195 00:19:26.195 00:19:26.195 real 0m12.703s 00:19:26.195 user 0m12.938s 00:19:26.195 sys 0m0.581s 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:26.195 ************************************ 00:19:26.195 END TEST bdev_fio_rw_verify 00:19:26.195 ************************************ 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.195 12:45:25 
blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:26.195 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "14cb80bc-e2c1-4157-9f4c-e79a75c041e8"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' 
"uuid": "14cb80bc-e2c1-4157-9f4c-e79a75c041e8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "14cb80bc-e2c1-4157-9f4c-e79a75c041e8",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "0919b136-7cc8-43a4-a699-9943b6575302",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "1023161c-d551-4351-839b-1257ba88551c",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "d124fab4-9215-42cd-879e-a38ef0df52cb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:26.456 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:26.456 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:26.456 /home/vagrant/spdk_repo/spdk 00:19:26.456 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:26.456 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:26.456 12:45:25 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 
-- # return 0 00:19:26.456 00:19:26.456 real 0m12.979s 00:19:26.456 user 0m13.060s 00:19:26.456 sys 0m0.707s 00:19:26.456 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.456 12:45:25 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:26.456 ************************************ 00:19:26.456 END TEST bdev_fio 00:19:26.456 ************************************ 00:19:26.456 12:45:26 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:26.456 12:45:26 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:26.456 12:45:26 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:26.456 12:45:26 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:26.456 12:45:26 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:26.456 ************************************ 00:19:26.456 START TEST bdev_verify 00:19:26.456 ************************************ 00:19:26.456 12:45:26 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:26.456 [2024-12-14 12:45:26.127100] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:19:26.456 [2024-12-14 12:45:26.127203] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92257 ] 00:19:26.716 [2024-12-14 12:45:26.299577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:26.716 [2024-12-14 12:45:26.406072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.716 [2024-12-14 12:45:26.406140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:27.284 Running I/O for 5 seconds... 00:19:29.604 16381.00 IOPS, 63.99 MiB/s [2024-12-14T12:45:30.281Z] 16096.50 IOPS, 62.88 MiB/s [2024-12-14T12:45:31.220Z] 16155.00 IOPS, 63.11 MiB/s [2024-12-14T12:45:32.159Z] 16242.75 IOPS, 63.45 MiB/s [2024-12-14T12:45:32.159Z] 16183.80 IOPS, 63.22 MiB/s 00:19:32.421 Latency(us) 00:19:32.421 [2024-12-14T12:45:32.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.421 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:32.421 Verification LBA range: start 0x0 length 0x2000 00:19:32.421 raid5f : 5.02 8019.86 31.33 0.00 0.00 23997.17 92.12 21978.89 00:19:32.421 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:32.421 Verification LBA range: start 0x2000 length 0x2000 00:19:32.421 raid5f : 5.01 8148.90 31.83 0.00 0.00 23628.92 245.04 21520.99 00:19:32.421 [2024-12-14T12:45:32.159Z] =================================================================================================================== 00:19:32.421 [2024-12-14T12:45:32.159Z] Total : 16168.76 63.16 0.00 0.00 23811.75 92.12 21978.89 00:19:33.802 00:19:33.802 real 0m7.233s 00:19:33.802 user 0m13.406s 00:19:33.802 sys 0m0.265s 00:19:33.802 ************************************ 00:19:33.802 END TEST bdev_verify 00:19:33.802 ************************************ 
00:19:33.802 12:45:33 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.802 12:45:33 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 12:45:33 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:33.802 12:45:33 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:33.802 12:45:33 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.802 12:45:33 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:33.802 ************************************ 00:19:33.802 START TEST bdev_verify_big_io 00:19:33.802 ************************************ 00:19:33.802 12:45:33 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:33.802 [2024-12-14 12:45:33.432270] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:33.802 [2024-12-14 12:45:33.432439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92351 ] 00:19:34.062 [2024-12-14 12:45:33.604295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:34.062 [2024-12-14 12:45:33.712319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.062 [2024-12-14 12:45:33.712365] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.631 Running I/O for 5 seconds... 
00:19:36.949 758.00 IOPS, 47.38 MiB/s [2024-12-14T12:45:37.625Z] 792.00 IOPS, 49.50 MiB/s [2024-12-14T12:45:38.564Z] 909.00 IOPS, 56.81 MiB/s [2024-12-14T12:45:39.504Z] 952.00 IOPS, 59.50 MiB/s [2024-12-14T12:45:39.504Z] 1002.60 IOPS, 62.66 MiB/s 00:19:39.766 Latency(us) 00:19:39.766 [2024-12-14T12:45:39.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.766 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:39.766 Verification LBA range: start 0x0 length 0x200 00:19:39.766 raid5f : 5.13 494.83 30.93 0.00 0.00 6392810.94 191.39 305872.82 00:19:39.766 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:39.766 Verification LBA range: start 0x200 length 0x200 00:19:39.766 raid5f : 5.23 509.80 31.86 0.00 0.00 6206194.82 131.47 302209.68 00:19:39.766 [2024-12-14T12:45:39.504Z] =================================================================================================================== 00:19:39.766 [2024-12-14T12:45:39.504Z] Total : 1004.63 62.79 0.00 0.00 6297262.05 131.47 305872.82 00:19:41.147 00:19:41.147 real 0m7.451s 00:19:41.147 user 0m13.870s 00:19:41.147 sys 0m0.239s 00:19:41.147 ************************************ 00:19:41.147 END TEST bdev_verify_big_io 00:19:41.147 ************************************ 00:19:41.147 12:45:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:41.147 12:45:40 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:41.147 12:45:40 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:41.147 12:45:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:41.147 12:45:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:41.147 12:45:40 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:41.147 ************************************ 00:19:41.147 START TEST bdev_write_zeroes 00:19:41.147 ************************************ 00:19:41.147 12:45:40 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:41.406 [2024-12-14 12:45:40.951693] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:41.407 [2024-12-14 12:45:40.951797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92451 ] 00:19:41.407 [2024-12-14 12:45:41.123177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.666 [2024-12-14 12:45:41.227277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.244 Running I/O for 1 seconds... 
00:19:43.202 29079.00 IOPS, 113.59 MiB/s
00:19:43.202 Latency(us)
00:19:43.202 [2024-12-14T12:45:42.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:43.202 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:19:43.202 raid5f : 1.01 29050.94 113.48 0.00 0.00 4392.29 1516.77 5924.00
00:19:43.202 [2024-12-14T12:45:42.940Z] ===================================================================================================================
00:19:43.202 [2024-12-14T12:45:42.940Z] Total : 29050.94 113.48 0.00 0.00 4392.29 1516.77 5924.00
00:19:44.584
00:19:44.584 real 0m3.210s
00:19:44.584 user 0m2.839s
00:19:44.584 sys 0m0.244s
00:19:44.584 12:45:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:44.584 12:45:44 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:19:44.584 ************************************
00:19:44.584 END TEST bdev_write_zeroes
00:19:44.584 ************************************
00:19:44.584 12:45:44 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:44.584 12:45:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:44.584 12:45:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:44.584 12:45:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:44.584 ************************************
00:19:44.584 START TEST bdev_json_nonenclosed
00:19:44.584 ************************************
00:19:44.584 12:45:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:44.584 [2024-12-14 12:45:44.226967] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:19:44.584 [2024-12-14 12:45:44.227155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92508 ]
00:19:44.842 [2024-12-14 12:45:44.397898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:44.842 [2024-12-14 12:45:44.518844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:44.842 [2024-12-14 12:45:44.519044] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:19:44.842 [2024-12-14 12:45:44.519121] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:44.842 [2024-12-14 12:45:44.519145] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:45.101
00:19:45.101 real 0m0.613s
00:19:45.101 user 0m0.392s
00:19:45.101 sys 0m0.116s
00:19:45.101 12:45:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:45.101 12:45:44 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:19:45.101 ************************************
00:19:45.101 END TEST bdev_json_nonenclosed
00:19:45.101 ************************************
00:19:45.101 12:45:44 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:45.101 12:45:44 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:19:45.101 12:45:44 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable
00:19:45.101 12:45:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:45.101 ************************************
00:19:45.101 START TEST bdev_json_nonarray
00:19:45.101 ************************************
00:19:45.101 12:45:44 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:19:45.361 [2024-12-14 12:45:44.908228] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:19:45.361 [2024-12-14 12:45:44.908435] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92528 ]
00:19:45.361 [2024-12-14 12:45:45.078751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:45.620 [2024-12-14 12:45:45.186978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:19:45.620 [2024-12-14 12:45:45.187183] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:19:45.620 [2024-12-14 12:45:45.187206] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:45.620 [2024-12-14 12:45:45.187227] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:45.879
00:19:45.879 real 0m0.598s
00:19:45.879 user 0m0.378s
00:19:45.879 sys 0m0.116s
00:19:45.879 ************************************
00:19:45.880 END TEST bdev_json_nonarray
00:19:45.880 ************************************
00:19:45.880 12:45:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:45.880 12:45:45 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:19:45.880 12:45:45 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:19:45.880 ************************************
00:19:45.880 END TEST blockdev_raid5f
00:19:45.880 ************************************
00:19:45.880
00:19:45.880 real 0m47.289s
00:19:45.880 user 1m4.056s
00:19:45.880 sys 0m4.636s
00:19:45.880 12:45:45 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:45.880 12:45:45 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:45.880 12:45:45 -- spdk/autotest.sh@194 -- # uname -s
00:19:45.880 12:45:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:19:45.880 12:45:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:45.880 12:45:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:45.880 12:45:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@260 -- # timing_exit lib
00:19:45.880 12:45:45 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:45.880 12:45:45 -- common/autotest_common.sh@10 -- # set +x
00:19:45.880 12:45:45 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:19:45.880 12:45:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:19:45.880 12:45:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:19:45.880 12:45:45 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:19:45.880 12:45:45 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:19:45.880 12:45:45 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:19:45.880 12:45:45 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:19:45.880 12:45:45 -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:45.880 12:45:45 -- common/autotest_common.sh@10 -- # set +x
00:19:45.880 12:45:45 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:19:45.880 12:45:45 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:19:45.880 12:45:45 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:19:45.880 12:45:45 -- common/autotest_common.sh@10 -- # set +x
00:19:47.786 INFO: APP EXITING
00:19:47.786 INFO: killing all VMs
00:19:47.786 INFO: killing vhost app
00:19:47.786 INFO: EXIT DONE
00:19:48.355 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:48.355 Waiting for block devices as requested
00:19:48.355 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:19:48.355 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:49.293 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:49.293 Cleaning
00:19:49.293 Removing: /var/run/dpdk/spdk0/config
00:19:49.293 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:19:49.293 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:19:49.293 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:19:49.293 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:19:49.293 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:19:49.293 Removing: /var/run/dpdk/spdk0/hugepage_info
00:19:49.293 Removing: /dev/shm/spdk_tgt_trace.pid58737
00:19:49.293 Removing: /var/run/dpdk/spdk0
00:19:49.293 Removing: /var/run/dpdk/spdk_pid58502
00:19:49.293 Removing: /var/run/dpdk/spdk_pid58737
00:19:49.293 Removing: /var/run/dpdk/spdk_pid58966
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59070
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59126
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59265
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59283
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59499
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59610
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59723
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59845
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59953
00:19:49.293 Removing: /var/run/dpdk/spdk_pid59992
00:19:49.293 Removing: /var/run/dpdk/spdk_pid60029
00:19:49.293 Removing: /var/run/dpdk/spdk_pid60105
00:19:49.293 Removing: /var/run/dpdk/spdk_pid60222
00:19:49.293 Removing: /var/run/dpdk/spdk_pid60664
00:19:49.293 Removing: /var/run/dpdk/spdk_pid60739
00:19:49.293 Removing: /var/run/dpdk/spdk_pid60815
00:19:49.553 Removing: /var/run/dpdk/spdk_pid60831
00:19:49.553 Removing: /var/run/dpdk/spdk_pid60981
00:19:49.553 Removing: /var/run/dpdk/spdk_pid60999
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61155
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61171
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61240
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61264
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61328
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61346
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61541
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61583
00:19:49.553 Removing: /var/run/dpdk/spdk_pid61671
00:19:49.553 Removing: /var/run/dpdk/spdk_pid63025
00:19:49.553 Removing: /var/run/dpdk/spdk_pid63237
00:19:49.553 Removing: /var/run/dpdk/spdk_pid63377
00:19:49.553 Removing: /var/run/dpdk/spdk_pid64019
00:19:49.553 Removing: /var/run/dpdk/spdk_pid64226
00:19:49.553 Removing: /var/run/dpdk/spdk_pid64372
00:19:49.553 Removing: /var/run/dpdk/spdk_pid65010
00:19:49.553 Removing: /var/run/dpdk/spdk_pid65339
00:19:49.553 Removing: /var/run/dpdk/spdk_pid65480
00:19:49.553 Removing: /var/run/dpdk/spdk_pid66866
00:19:49.553 Removing: /var/run/dpdk/spdk_pid67119
00:19:49.553 Removing: /var/run/dpdk/spdk_pid67265
00:19:49.553 Removing: /var/run/dpdk/spdk_pid68647
00:19:49.553 Removing: /var/run/dpdk/spdk_pid68906
00:19:49.553 Removing: /var/run/dpdk/spdk_pid69046
00:19:49.553 Removing: /var/run/dpdk/spdk_pid70434
00:19:49.553 Removing: /var/run/dpdk/spdk_pid70881
00:19:49.553 Removing: /var/run/dpdk/spdk_pid71031
00:19:49.553 Removing: /var/run/dpdk/spdk_pid72516
00:19:49.553 Removing: /var/run/dpdk/spdk_pid72775
00:19:49.553 Removing: /var/run/dpdk/spdk_pid72922
00:19:49.553 Removing: /var/run/dpdk/spdk_pid74401
00:19:49.553 Removing: /var/run/dpdk/spdk_pid74668
00:19:49.553 Removing: /var/run/dpdk/spdk_pid74817
00:19:49.553 Removing: /var/run/dpdk/spdk_pid76305
00:19:49.553 Removing: /var/run/dpdk/spdk_pid76792
00:19:49.553 Removing: /var/run/dpdk/spdk_pid76938
00:19:49.553 Removing: /var/run/dpdk/spdk_pid77076
00:19:49.553 Removing: /var/run/dpdk/spdk_pid77504
00:19:49.553 Removing: /var/run/dpdk/spdk_pid78228
00:19:49.553 Removing: /var/run/dpdk/spdk_pid78611
00:19:49.553 Removing: /var/run/dpdk/spdk_pid79295
00:19:49.553 Removing: /var/run/dpdk/spdk_pid79736
00:19:49.553 Removing: /var/run/dpdk/spdk_pid80491
00:19:49.553 Removing: /var/run/dpdk/spdk_pid80919
00:19:49.553 Removing: /var/run/dpdk/spdk_pid82879
00:19:49.553 Removing: /var/run/dpdk/spdk_pid83323
00:19:49.553 Removing: /var/run/dpdk/spdk_pid83757
00:19:49.553 Removing: /var/run/dpdk/spdk_pid85851
00:19:49.553 Removing: /var/run/dpdk/spdk_pid86337
00:19:49.553 Removing: /var/run/dpdk/spdk_pid86856
00:19:49.553 Removing: /var/run/dpdk/spdk_pid87916
00:19:49.553 Removing: /var/run/dpdk/spdk_pid88243
00:19:49.553 Removing: /var/run/dpdk/spdk_pid89178
00:19:49.553 Removing: /var/run/dpdk/spdk_pid89507
00:19:49.553 Removing: /var/run/dpdk/spdk_pid90444
00:19:49.553 Removing: /var/run/dpdk/spdk_pid90774
00:19:49.553 Removing: /var/run/dpdk/spdk_pid91451
00:19:49.553 Removing: /var/run/dpdk/spdk_pid91726
00:19:49.553 Removing: /var/run/dpdk/spdk_pid91793
00:19:49.553 Removing: /var/run/dpdk/spdk_pid91834
00:19:49.553 Removing: /var/run/dpdk/spdk_pid92084
00:19:49.813 Removing: /var/run/dpdk/spdk_pid92257
00:19:49.813 Removing: /var/run/dpdk/spdk_pid92351
00:19:49.813 Removing: /var/run/dpdk/spdk_pid92451
00:19:49.813 Removing: /var/run/dpdk/spdk_pid92508
00:19:49.813 Removing: /var/run/dpdk/spdk_pid92528
00:19:49.813 Clean
00:19:49.813 12:45:49 -- common/autotest_common.sh@1453 -- # return 0
00:19:49.813 12:45:49 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:19:49.813 12:45:49 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:49.813 12:45:49 -- common/autotest_common.sh@10 -- # set +x
00:19:49.813 12:45:49 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:19:49.813 12:45:49 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:49.813 12:45:49 -- common/autotest_common.sh@10 -- # set +x
00:19:49.813 12:45:49 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:49.813 12:45:49 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:19:49.813 12:45:49 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:19:49.813 12:45:49 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:19:49.813 12:45:49 -- spdk/autotest.sh@398 -- # hostname
00:19:49.813 12:45:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:19:50.073 geninfo: WARNING: invalid characters removed from testname!
00:20:12.025 12:46:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:13.413 12:46:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:15.315 12:46:14 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:17.216 12:46:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:19.125 12:46:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:21.026 12:46:20 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:22.926 12:46:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:20:22.926 12:46:22 -- spdk/autorun.sh@1 -- $ timing_finish
00:20:22.926 12:46:22 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:20:22.926 12:46:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:23.186 12:46:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:23.186 12:46:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:23.186 + [[ -n 5429 ]]
00:20:23.186 + sudo kill 5429
00:20:23.197 [Pipeline] }
00:20:23.212 [Pipeline] // timeout
00:20:23.217 [Pipeline] }
00:20:23.231 [Pipeline] // stage
00:20:23.236 [Pipeline] }
00:20:23.251 [Pipeline] // catchError
00:20:23.260 [Pipeline] stage
00:20:23.262 [Pipeline] { (Stop VM)
00:20:23.274 [Pipeline] sh
00:20:23.556 + vagrant halt
00:20:26.096 ==> default: Halting domain...
00:20:34.223 [Pipeline] sh
00:20:34.510 + vagrant destroy -f
00:20:37.081 ==> default: Removing domain...
00:20:37.093 [Pipeline] sh
00:20:37.373 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:20:37.381 [Pipeline] }
00:20:37.395 [Pipeline] // stage
00:20:37.402 [Pipeline] }
00:20:37.416 [Pipeline] // dir
00:20:37.421 [Pipeline] }
00:20:37.435 [Pipeline] // wrap
00:20:37.441 [Pipeline] }
00:20:37.453 [Pipeline] // catchError
00:20:37.462 [Pipeline] stage
00:20:37.464 [Pipeline] { (Epilogue)
00:20:37.476 [Pipeline] sh
00:20:37.756 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:43.035 [Pipeline] catchError
00:20:43.038 [Pipeline] {
00:20:43.051 [Pipeline] sh
00:20:43.333 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:43.333 Artifacts sizes are good
00:20:43.341 [Pipeline] }
00:20:43.355 [Pipeline] // catchError
00:20:43.365 [Pipeline] archiveArtifacts
00:20:43.372 Archiving artifacts
00:20:43.467 [Pipeline] cleanWs
00:20:43.478 [WS-CLEANUP] Deleting project workspace...
00:20:43.478 [WS-CLEANUP] Deferred wipeout is used...
00:20:43.484 [WS-CLEANUP] done
00:20:43.486 [Pipeline] }
00:20:43.501 [Pipeline] // stage
00:20:43.506 [Pipeline] }
00:20:43.520 [Pipeline] // node
00:20:43.526 [Pipeline] End of Pipeline
00:20:43.585 Finished: SUCCESS